gem_id (string) | paper_id (string) | paper_title (string) | paper_abstract (string) | paper_content (sequence) | paper_headers (sequence) | slide_id (string) | slide_title (string) | slide_content_text (string) | target (string) | references (list)
---|---|---|---|---|---|---|---|---|---|---|
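The schema above can be explored programmatically with the Hugging Face `datasets` library. The following is a minimal sketch only: the dataset ID `GEM/SciDuet`, the split name, and the nested `paper_content` keys are assumptions inferred from this dump, not confirmed identifiers.

```python
# Minimal sketch: load one paper-to-slide record and inspect its fields.
# Assumed: the corpus is hosted on the Hugging Face Hub under an ID such as
# "GEM/SciDuet" and exposes the columns listed in the header above.
from datasets import load_dataset

dataset = load_dataset("GEM/SciDuet", split="train")  # hypothetical ID and split

row = dataset[0]
print(row["gem_id"])        # e.g. "GEM-SciDuet-train-25#paper-1026#slide-6"
print(row["paper_title"])   # title of the source paper
print(row["slide_title"])   # title of the slide to be generated
print(row["target"][:200])  # reference slide text (the generation target)

# paper_content appears to hold parallel lists of sentence ids and sentences.
sentences = row["paper_content"]["paper_content_text"]
print(len(sentences), "sentences in the paper body")
```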
GEM-SciDuet-train-25#paper-1026#slide-6 | 1026 | SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA | We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found on the task's website: https://competitions.codalab.org/competitions/19160 (footnotes: 10 http://spacy.io, 11 http://fasttext.cc). | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159
],
"paper_content_text": [
"Overview Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes have recently been put forth.",
"Examples include Abstract Meaning Representation (AMR; Banarescu et al., 2013) , Broad-coverage Semantic Dependencies (SDP; Oepen et al., 2016) , Universal Decompositional Semantics (UDS; White et al., 2016) , Parallel Meaning Bank (Abzianidze et al., 2017) , and Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) .",
"These advances in semantic representation, along with corresponding advances in semantic parsing, can potentially benefit essentially all text understanding tasks, and have already demonstrated applicability to a variety of tasks, including summarization (Liu et al., 2015; Dohare and Karnick, 2017) , paraphrase detection (Issa et al., 2018) , and semantic evaluation (using UCCA; see below).",
"In this shared task, we focus on UCCA parsing in multiple languages.",
"One of our goals is to benefit semantic parsing in languages with less annotated resources by making use of data from more resource-rich languages.",
"We refer to this approach as cross-lingual parsing, while other works (Zhang et al., 2017 (Zhang et al., , 2018 define cross-lingual parsing as the task of parsing text in one language to meaning representation in another language.",
"In addition to its potential applicative value, work on semantic parsing poses interesting algorithmic and modeling challenges, which are often different from those tackled in syntactic parsing, including reentrancy (e.g., for sharing arguments across predicates), and the modeling of the interface with lexical semantics.",
"UCCA is a cross-linguistically applicable semantic representation scheme, building on the established Basic Linguistic Theory typological framework (Dixon, 2010b (Dixon, ,a, 2012 .",
"It has demonstrated applicability to multiple languages, including English, French and German, and pilot annotation projects were conducted on a few languages more.",
"UCCA structures have been shown to be well-preserved in translation (Sulem et al., 2015) , and to support rapid annotation by nonexperts, assisted by an accessible annotation interface .",
"1 UCCA has already shown applicative value for text simplifica- Scene Elements P Process The main relation of a Scene that evolves in time (usually an action or movement).",
"S State The main relation of a Scene that does not evolve in time.",
"A Participant Scene participant (including locations, abstract entities and Scenes serving as arguments).",
"D Adverbial A secondary relation in a Scene.",
"Elements of Non-Scene Units C Center Necessary for the conceptualization of the parent unit.",
"E Elaborator A non-Scene relation applying to a single Center.",
"N Connector A non-Scene relation applying to two or more Centers, highlighting a common feature.",
"R Relator All other types of non-Scene relations: (1) Rs that relate a C to some super-ordinate relation, and (2) Rs that relate two Cs pertaining to different aspects of the parent unit.",
"Inter-Scene Relations H Parallel Scene A Scene linked to other Scenes by regular linkage (e.g., temporal, logical, purposive).",
"L Linker A relation between two or more Hs (e.g., \"when\", \"if\", \"in order to\").",
"G Ground A relation between the speech event and the uttered Scene (e.g., \"surprisingly\").",
"Other F Function Does not introduce a relation or participant.",
"Required by some structural pattern.",
"tion (Sulem et al., 2018b) , as well as for defining semantic evaluation measures for text-to-text generation tasks, including machine translation (Birch et al., 2016) , text simplification (Sulem et al., 2018a) and grammatical error correction (Choshen and Abend, 2018) .",
"The shared task defines a number of tracks, based on the different corpora and the availability of external resources (see §5).",
"It received submissions from eight research groups around the world.",
"In all settings at least one of the submitted systems improved over the state-of-the-art TUPA parser (Hershcovich et al., 2017 (Hershcovich et al., , 2018 , used as a baseline.",
"Task Definition UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and non-terminal nodes to semantic units that participate in some super-ordinate relation.",
"Edges are labeled, indicating the role of a child in the relation the parent represents.",
"Nodes and edges belong to one of several layers, each corresponding to a \"module\" of semantic distinctions.",
"UCCA's foundational layer covers the predicate-argument structure evoked by predicates of all grammatical categories (verbal, nominal, adjectival and others), the inter-relations between them, and other major linguistic phenomena such as semantic heads and multi-word expressions.",
"It is the only layer for which annotated corpora exist at the moment, and is thus the target of this shared task.",
"The layer's basic notion is the Scene, describing a state, action, movement or some other relation that evolves in time.",
"Each Scene contains one main relation (marked as either a Process or a State), as well as one or more Participants.",
"For example, the sentence \"After graduation, John moved to Paris\" (Figure 1 ) contains two Scenes, whose main relations are \"graduation\" and \"moved\".",
"\"John\" is a Participant in both Scenes, while \"Paris\" only in the latter.",
"Further categories account for inter-Scene relations and the internal structure of complex arguments and relations (e.g., coordination and multi-word expressions).",
"Table 1 provides a concise description of the categories used by the UCCA foundational layer.",
"UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges (appear dashed in Figure 1 ) that allow for a unit to participate in several super-ordinate relations.",
"Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG.",
"UCCA graphs may contain implicit units with no correspondent in the text.",
"Figure 2 shows the annotation for the sentence \"A similar technique is almost impossible to apply to other crops, such as cotton, soybeans and rice.\"",
"2 It includes a single Scene, whose main relation is \"apply\", a secondary relation \"almost impossible\", as well as two complex arguments: \"a similar technique\" and the coordinated argument \"such as cotton, soybeans, and rice.\"",
"In addition, the Scene includes an implicit argument, which represents the agent of the \"apply\" relation.",
"While parsing technology is well-established for syntactic parsing, UCCA has several formal properties that distinguish it from syntactic representations, mostly UCCA's tendency to abstract away from syntactic detail that do not affect argument structure.",
"For instance, consider the following examples where the concept of a Scene has a different rationale from the syntactic concept of a clause.",
"First, non-verbal predicates in UCCA are represented like verbal ones, such as when they appear in copula clauses or noun phrases.",
"Indeed, in Figure 1 , \"graduation\" and \"moved\" are considered separate Scenes, despite appearing in the same clause.",
"Second, in the same example, \"John\" is marked as a (remote) Participant in the graduation Scene, despite not being explicitly mentioned.",
"Third, consider the possessive construction in \"John's trip home\".",
"While in UCCA \"trip\" evokes a Scene in which \"John\" is a Participant, a syntactic scheme would analyze this phrase similarly to \"John's shoes\".",
"The differences in the challenges posed by syntactic parsing and UCCA parsing, and more generally by semantic parsing, motivate the development of targeted parsing technology to tackle it.",
"Data & Resources All UCCA corpora are freely available.",
"3 For English, we use v1.2.3 of the Wikipedia UCCA corpus (Wiki), v1.2.2 of the UCCA Twenty Thousand Leagues Under the Sea English-French parallel corpus (20K), which includes UCCA manual annotation for the first five chapters in French and English, and v1.0.1 of the UCCA German Twenty 3 https://github.com/ UniversalConceptualCognitiveAnnotation Thousand Leagues Under the Sea corpus, which includes the entire book in German.",
"For consistent annotation, we replace any Time and Quantifier labels with Adverbial and Elaborator in these data sets.",
"The resulting training, development 4 and test sets 5 are publicly available, and the splits are given in Table 2 .",
"Statistics on various structural properties are given in Table 3 .",
"The corpora were manually annotated according to v1.2 of the UCCA guidelines, 6 and reviewed by a second annotator.",
"All data was passed through automatic validation and normalization scripts.",
"7 The goal of validation is to rule out cases that are inconsistent with the UCCA annotation guidelines.",
"For example, a Scene, defined by the presence of a Process or a State, should include at least one Participant.",
"Due to the small amount of annotated data available for French, we only provided a minimal training set of 15 sentences, in addition to the development and test set.",
"Systems for French were expected to pursue semi-supervised approaches, such as cross-lingual learning or structure projection, leveraging the parallel nature of the corpus, or to rely on datasets for related formalisms, such as Universal Dependencies (Nivre et al., 2016) .",
"The full unannotated 20K Leagues corpus in English and French was released as well, in order to facilitate pursuing cross-lingual approaches.",
"Datasets were released in an XML format, including tokenized text automatically pre- processed using spaCy (see §5), and gold-standard UCCA annotation for the train and development sets.",
"8 To facilitate the use of existing NLP tools, we also released the data in SDP, AMR, CoNLL-U and plain text formats.",
"TUPA: The Baseline Parser We use the TUPA parser, the only parser for UCCA at the time the task was announced, as a baseline (Hershcovich et al., 2017 (Hershcovich et al., , 2018 .",
"TUPA is a transition-based DAG parser based on a BiLSTM-based classifier.",
"9 TUPA in itself has been found superior to a number of conversionbased parsers that use existing parsers for other formalisms to parse UCCA by constructing a twoway conversion protocol between the formalisms.",
"It can thus be regarded as a strong baseline for sys-8 https://github.com/ UniversalConceptualCognitiveAnnotation/ docs/blob/master/FORMAT.md 9 https://github.com/huji-nlp/tupa tem submissions to the shared task.",
"Evaluation Tracks.",
"Participants in the task were evaluated in four settings: In order to allow both even ground comparison between systems and using hitherto untried resources, we held both an open and a closed track for submissions in the English and German settings.",
"Closed track submissions were only allowed to use the gold-standard UCCA annotation distributed for the task in the target language, and were limited in their use of additional resources.",
"Concretely, the only additional data they were allowed to use is that used by TUPA, which consists of automatic annotations provided by spaCy: 10 POS tags, syntactic dependency relations, and named entity types and spans.",
"In addition, the closed track only allowed the use of word embeddings provided by fastText (Bojanowski et al., 2017 ) 11 for all languages.",
"Systems in the open track, on the other hand, were allowed to use any additional resource, such as UCCA annotation in other languages, dictionaries or datasets for other tasks, provided that they make sure not to use any additional gold standard annotation over the same text used in the UCCA corpora.",
"12 In both tracks, we required that submitted systems are not trained on the development data.",
"We only held an open track for French, due to the paucity of training data.",
"The four settings and two tracks result in a total of 7 competitions.",
"Scoring.",
"The following scores an output graph G 1 = (V 1 , E 1 ) against a gold one, G 2 = (V 2 , E 2 ), over the same sequence of terminals (tokens) W .",
"For a node v in V 1 or V 2 , define yield(v) ⊆ W as is its set of terminal descendants.",
"A pair of edges (v 1 , u 1 ) ∈ E 1 and (v 2 , u 2 ) ∈ E 2 with labels (categories) 1 , 2 is matching if yield(u 1 ) = yield(u 2 ) and 1 = 2 .",
"Labeled precision and recall are defined by dividing the number of matching edges in G 1 and G 2 by |E 1 | and |E 2 |, respectively.",
"F 1 is their harmonic mean: · Precision · Recall Precision + Recall Unlabeled precision, recall and F 1 are the same, but without requiring that 1 = 2 for the edges to match.",
"We evaluate these measures for primary and remote edges separately.",
"For a more finegrained evaluation, we additionally report precision, recall and F 1 on edges of each category.",
"13 Participating Systems We received a total of eight submissions to the different tracks: MaskParse@Deskiñ 12 We are not aware of any such annotation, but include this restriction for completeness.",
"13 The official evaluation script providing both coarse-grained and fine-grained scores can be found in https://github.com/huji-nlp/ucca/blob/ master/scripts/evaluate_standard.py.",
"14 It was later discovered that CUNY-PekingU used some of the evaluation data for training in the open tracks, and they were thus disqualified from these tracks.",
"In terms of parsing approaches, the task was quite varied.",
"HLT@SUDA converted UCCA graphs to constituency trees and trained a constituency parser and a recovery mechanism of remote edges in a multi-task framework.",
"MaskParse@Deskiñ used a bidirectional GRU tagger with a masking mechanism.",
"Tüpa and XLangMo used a transition-based approach.",
"UC Davis used an encoder-decoder architecture.",
"GCN-SEM uses a BiLSTM model to predict Semantic Dependency Parsing tags, when the syntactic dependency tree is given in the input.",
"CUNY-PKU is based on an ensemble that includes different variations of the TUPA parser.",
"DAN-GNT@UIT.VNU-HCM converted syntactic dependency trees to UCCA graphs.",
"Different systems handled remote edges differently.",
"DANGNT@UIT.VNU-HCM and GCN-SEM ignored remote edges.",
"UC Davis used a different BiLSTM for remote edges.",
"HLT@SUDA marked remote edges when converting the graph to a constituency tree and trained a classification model for their recovery.",
"MaskParse@Deskiñ handles remote edges by detecting arguments that are outside of the parent's node span using a detection threshold on the output probabilities.",
"In terms of using the data, all teams but one used the UCCA XML format, two used the CoNLL-U format, which is derived by a lossy conversion process, and only one team found the other data formats helpful.",
"One of the teams (MaskParse@Deskiñ) built a new training data adapted to their model by repeating each sentence N times, N being the number of non-terminal nodes in the UCCA graphs.",
"Three of the teams adapted the baseline TUPA parser, or parts of it to form their parser, namely TüPa, CUNY-PekingU and XLangMo; HLT@SUDA used a constituency parser (Stern et al., 2017) as a component in their model; DANGNT@UIT.VNU-HCM is a rule-based system over the Stanford Parser, and the rest are newly constructed parsers.",
"All teams found it useful to use external resources beyond those provided by the Shared Task.",
"Four submissions used external embeddings, MUSE (Conneau et al., 2017) in the case of MaskParse@Deskiñ and XLangMo, ELMo (Peters et al., 2018) in the case of TüPa, 15 and BERT (Devlin et al., 2019) in the case of HLT@SUDA.",
"Other resources included additional unlabeled data (TüPa), a list of multi-word expressions (MaskParse@Deskiñ), and the Stanford parser in the case of DANGNT@UIT.VNU-HCM.",
"Only CUNY-PKU used the 20K unlabeled parallel data in English and French.",
"A common trend for many of the systems was the use of cross-lingual projection or transfer (MaskParse@Deskiñ, HLT@SUDA, TüPa, GCN-Sem, CUNY-PKU and XLangMo).",
"This was necessary for French, and was found helpful for German as well (CUNY-PKU).",
"Table 4 shows the labeled and unlabeled F1 for primary and remote edges, for each system in each track.",
"Overall F1 (All) is the F1 calculated over both primary and remote edges.",
"Full results are available online.",
"16 Figure 3 shows the fine-grained evaluation by labeled F1 per UCCA category, for each system in each track.",
"While Ground edges were uniformly 16 http://bit.ly/semeval2019task1results difficult to parse due to their sparsity in the training data, Relators were the easiest for all systems, as they are both common and predictable.",
"The Process/State distinction proved challenging, and most main relations were identified as the more common Process category.",
"The winning system in most tracks (HLT@SUDA) performed better on almost all categories.",
"Its largest advantage was on Parallel Scenes and Linkers, showing was especially successful at identifying Scene boundaries relative to the other systems, which requires a good understanding of syntax.",
"Results Discussion The HLT@SUDA system participated in all the tracks, obtaining the first place in the six English and German tracks and the second place in the French open track.",
"The system is based on the conversion of UCCA graphs into constituency trees, marking remote and discontinuous edges for recovery.",
"The classification-based recovery of the remote edges is performed simultaneously with the constituency parsing in a multi-task learning framework.",
"This work, which further connects between semantic and syntactic parsing, proposes a recovery mechanism that can be applied to other grammatical formalisms, enabling the conversion of a given formalism to another one for parsing.",
"The idea of this system is inspired by the pseudo non-projective dependency parsing approach proposed by Nivre and Nilsson (2005) .",
"The MaskParse@Deskiñ system only participated to the French open track, focusing on crosslingual parsing.",
"The system uses a semantic tagger, implemented with a bidirectional GRU and a masking mechanism to recursively extract the inner semantic structures in the graph.",
"Multilingual word embeddings are also used.",
"Using the English and German training data as well as the small French trial data for training, the parser ranked fourth in the French open track with a labeled F1 score of 65.4%, suggesting that this new model could be useful for low-resource languages.",
"The Tüpa system takes a transition-based approach, building on the TUPA transition system and oracle, but modifies its feature representations.",
"Specifically, instead of representing the parser configuration using LSTMs over the partially parsed graph, stack and buffer, they use feedforward networks with ELMo contextualized embeddings.",
"The stack and buffer are represented by the top three items on them.",
"For the partially parsed graph, they extract the rightmost and leftmost parents and children of the respective items, and represent them by the ELMo embedding of their form, the embedding of their dependency heads (for terminals, for non-terminals this is replaced with a learned embedding) and the embeddings of all terminal children.",
"Results are generally on-par with the TUPA baseline, and surpass it from the out-of-domain English setting.",
"This suggests that the TUPA architecture may be simplified, without compromising performance.",
"The UC Davis system participated only in the English closed track, where they achieved the second highest score, on par with TUPA.",
"The proposed parser has an encoder-decoder architecture, where the encoder is a simple BiLSTM encoder for each span of words.",
"The decoder iteratively and greedily traverses the sentence, and attempts to predict span boundaries.",
"The basic algorithm yields an unlabeled contiguous phrase-based tree, but additional modules predict the labels of the spans, discontiguous units (by joining together spans from the contiguous tree under a new node), and remote edges.",
"The work is inspired by Kitaev and Klein (2018) , who used similar methods for constituency parsing.",
"The GCN-SEM system uses a BiLSTM encoder, and predicts bi-lexical semantic dependencies (in the SDP format) using word, token and syntactic dependency parses.",
"The latter is incorporated into the network with a graph convolutional network (GCN).",
"The team participated in the English and German closed tracks, and were not among the highest-ranking teams.",
"However, scores on the UCCA test sets converted to the bi-lexical CoNLL-U format were rather high, implying that the lossy conversion could be much of the reason.",
"The CUNY-PKU system was based on an ensemble.",
"The ensemble included variations of TUPA parser, namely the MLP and BiLSTM models (Hershcovich et al., 2017) and the BiLSTM model with an additional MLP.",
"The system also proposes a way to aggregate the ensemble going through CKY parsing and accounting for remotes and discontinuous spans.",
"The team participated in all tracks, including additional information in the open domain, notably synthetic data based on automatically translating annotated texts.",
"Their system ranks first in the French open track.",
"The DANGNT@UIT.VNU-HCM system partic-ipated only in the English Wiki open and closed tracks.",
"The system is based on graph transformations from dependency trees into UCCA, using heuristics to create non-terminal nodes and map the dependency relations to UCCA categories.",
"The manual rules were developed based on the training and development data.",
"As the system converts trees to trees and does not add reentrancies, it does not produce remote edges.",
"While the results are not among the highest-ranking in the task, the primary labeled F1 score of 71.1% in the English open track shows that a rule-based system on top of a leading dependency parser (the Stanford parser) can obtain reasonable results for this task.",
"Conclusion The task has yielded substantial improvements to UCCA parsing in all settings.",
"Given that the best reported results were achieved with different parsing and learning approaches than the baseline model TUPA (which has been the only available parser for UCCA), the task opens a variety of paths for future improvement.",
"Cross-lingual transfer, which capitalizes on UCCA's tendency to be preserved in translation, was employed by a number of systems and has proven remarkably effective.",
"Indeed, the high scores obtained for French parsing in a low-resource setting suggest that high quality UCCA parsing can be straightforwardly extended to additional languages, with only a minimal amount of manual labor.",
"Moreover, given the conceptual similarity between the different semantic representations , it is likely the parsers developed for the shared task will directly contribute to the development of other semantic parsing technology.",
"Such a contribution is facilitated by the available conversion scripts available between UCCA and other formats."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"2",
"6",
"8",
"9"
],
"paper_header_content": [
"Overview",
"Task Definition",
"Data & Resources",
"TUPA: The Baseline Parser",
"Evaluation",
"·",
"Participating Systems",
"Discussion",
"Conclusion"
]
} | GEM-SciDuet-train-25#paper-1026#slide-6 | Conversion | [Slide figure: AMR (graduate-01, name, op1), SDP (ARG1/ARG2, head) and CoNLL-U (root, obl, nsubj, case, punct) analyses of "After graduation, John moved to Paris"] | [the same figure text repeated as the target] | [] |
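The record above pairs the paper with its "Conversion" slide, which contrasts AMR, SDP and CoNLL-U analyses of the running example with UCCA's own structure. As a toy illustration of the graph structure described in the paper's Task Definition (terminals as tokens, labeled edges from non-terminal units, remote edges creating reentrancy), the sketch below builds a small UCCA-like graph for "After graduation, John moved to Paris" and computes terminal yields; the unit names and exact attachments are assumptions for illustration, not the released data format.

```python
# Toy UCCA-like graph for "After graduation, John moved to Paris" (cf. Figure 1
# of the paper). Each edge is (parent, child, label, is_remote); unit names
# (H1, H2, A1) and attachments are illustrative assumptions.
EDGES = [
    ("root", "After", "L", False),     # Linker between the two Scenes
    ("root", "H1", "H", False),        # "graduation" Scene
    ("root", "H2", "H", False),        # "John moved to Paris" Scene
    ("H1", "graduation", "P", False),  # main relation of the first Scene
    ("H1", "John", "A", True),         # remote Participant -> reentrancy (DAG)
    ("H2", "John", "A", False),
    ("H2", "moved", "P", False),
    ("H2", "A1", "A", False),          # "to Paris"
    ("A1", "to", "R", False),
    ("A1", "Paris", "C", False),
]
TOKENS = {"After", "graduation", "John", "moved", "to", "Paris"}

def terminal_yield(node):
    """Terminal (token) descendants of a unit, following primary edges only
    (a simplification; the official tooling treats remote edges separately)."""
    if node in TOKENS:
        return {node}
    result = set()
    for parent, child, _label, is_remote in EDGES:
        if parent == node and not is_remote:
            result |= terminal_yield(child)
    return result

print(terminal_yield("H2"))  # {'John', 'moved', 'to', 'Paris'}
print(terminal_yield("H1"))  # {'graduation'}; "John" is reached only via a remote edge
```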
GEM-SciDuet-train-25#paper-1026#slide-7 | 1026 | SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA | [paper_abstract, paper_content and paper_headers identical to the previous record; omitted as a verbatim duplicate] | GEM-SciDuet-train-25#paper-1026#slide-7 | Evaluation | True (human-annotated) graph Automatically predicted graph for the same text
[Slide figure: a true (human-annotated) UCCA graph and an automatically predicted graph for "After graduation, John moved to Paris", shown side by side with edge labels L, H, U, A, P, S, F]
Match primary edges by terminal yield + label.
Calculate precision, recall and F1 scores.
Repeat for remote edges. | [the same slide text repeated as the target] | [] |
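The "Evaluation" slide above summarizes the official metric: predicted and gold edges are matched when they share both terminal yield and label, precision/recall/F1 are computed from the match counts, and the procedure is repeated for remote edges. The sketch below illustrates that matching logic under the assumption that each edge has already been reduced to a (terminal-yield, label) pair; it is not the official evaluate_standard.py script.

```python
# Illustrative edge-matching score in the spirit of the shared task's metric.
# Each edge is (terminal_yield, label), with terminal_yield a frozenset of
# tokens; producing these pairs from parser output is assumed to be done already.
from collections import Counter

def edge_scores(predicted, gold, labeled=True):
    """Precision, recall and F1 over edges matched by terminal yield (+ label)."""
    def key(edge):
        span, label = edge
        return (span, label) if labeled else span
    pred_counts = Counter(key(e) for e in predicted)
    gold_counts = Counter(key(e) for e in gold)
    matched = sum((pred_counts & gold_counts).values())  # multiset intersection
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [(frozenset({"John", "moved", "to", "Paris"}), "H"),
        (frozenset({"to", "Paris"}), "A"),
        (frozenset({"John"}), "A")]
pred = [(frozenset({"John", "moved", "to", "Paris"}), "H"),
        (frozenset({"to", "Paris"}), "E"),  # correct span, wrong label
        (frozenset({"John"}), "A")]

print(edge_scores(pred, gold, labeled=True))   # approximately (0.67, 0.67, 0.67)
print(edge_scores(pred, gold, labeled=False))  # (1.0, 1.0, 1.0)
```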
GEM-SciDuet-train-25#paper-1026#slide-8 | 1026 | SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA | [paper_abstract and paper_content identical to the first record; omitted as a verbatim duplicate]
"First, non-verbal predicates in UCCA are represented like verbal ones, such as when they appear in copula clauses or noun phrases.",
"Indeed, in Figure 1 , \"graduation\" and \"moved\" are considered separate Scenes, despite appearing in the same clause.",
"Second, in the same example, \"John\" is marked as a (remote) Participant in the graduation Scene, despite not being explicitly mentioned.",
"Third, consider the possessive construction in \"John's trip home\".",
"While in UCCA \"trip\" evokes a Scene in which \"John\" is a Participant, a syntactic scheme would analyze this phrase similarly to \"John's shoes\".",
"The differences in the challenges posed by syntactic parsing and UCCA parsing, and more generally by semantic parsing, motivate the development of targeted parsing technology to tackle it.",
"Data & Resources All UCCA corpora are freely available.",
"3 For English, we use v1.2.3 of the Wikipedia UCCA corpus (Wiki), v1.2.2 of the UCCA Twenty Thousand Leagues Under the Sea English-French parallel corpus (20K), which includes UCCA manual annotation for the first five chapters in French and English, and v1.0.1 of the UCCA German Twenty 3 https://github.com/ UniversalConceptualCognitiveAnnotation Thousand Leagues Under the Sea corpus, which includes the entire book in German.",
"For consistent annotation, we replace any Time and Quantifier labels with Adverbial and Elaborator in these data sets.",
"The resulting training, development 4 and test sets 5 are publicly available, and the splits are given in Table 2 .",
"Statistics on various structural properties are given in Table 3 .",
"The corpora were manually annotated according to v1.2 of the UCCA guidelines, 6 and reviewed by a second annotator.",
"All data was passed through automatic validation and normalization scripts.",
"7 The goal of validation is to rule out cases that are inconsistent with the UCCA annotation guidelines.",
"For example, a Scene, defined by the presence of a Process or a State, should include at least one Participant.",
"Due to the small amount of annotated data available for French, we only provided a minimal training set of 15 sentences, in addition to the development and test set.",
"Systems for French were expected to pursue semi-supervised approaches, such as cross-lingual learning or structure projection, leveraging the parallel nature of the corpus, or to rely on datasets for related formalisms, such as Universal Dependencies (Nivre et al., 2016) .",
"The full unannotated 20K Leagues corpus in English and French was released as well, in order to facilitate pursuing cross-lingual approaches.",
"Datasets were released in an XML format, including tokenized text automatically pre- processed using spaCy (see §5), and gold-standard UCCA annotation for the train and development sets.",
"8 To facilitate the use of existing NLP tools, we also released the data in SDP, AMR, CoNLL-U and plain text formats.",
"TUPA: The Baseline Parser We use the TUPA parser, the only parser for UCCA at the time the task was announced, as a baseline (Hershcovich et al., 2017 (Hershcovich et al., , 2018 .",
"TUPA is a transition-based DAG parser based on a BiLSTM-based classifier.",
"9 TUPA in itself has been found superior to a number of conversionbased parsers that use existing parsers for other formalisms to parse UCCA by constructing a twoway conversion protocol between the formalisms.",
"It can thus be regarded as a strong baseline for sys-8 https://github.com/ UniversalConceptualCognitiveAnnotation/ docs/blob/master/FORMAT.md 9 https://github.com/huji-nlp/tupa tem submissions to the shared task.",
"Evaluation Tracks.",
"Participants in the task were evaluated in four settings: In order to allow both even ground comparison between systems and using hitherto untried resources, we held both an open and a closed track for submissions in the English and German settings.",
"Closed track submissions were only allowed to use the gold-standard UCCA annotation distributed for the task in the target language, and were limited in their use of additional resources.",
"Concretely, the only additional data they were allowed to use is that used by TUPA, which consists of automatic annotations provided by spaCy: 10 POS tags, syntactic dependency relations, and named entity types and spans.",
"In addition, the closed track only allowed the use of word embeddings provided by fastText (Bojanowski et al., 2017 ) 11 for all languages.",
"Systems in the open track, on the other hand, were allowed to use any additional resource, such as UCCA annotation in other languages, dictionaries or datasets for other tasks, provided that they make sure not to use any additional gold standard annotation over the same text used in the UCCA corpora.",
"12 In both tracks, we required that submitted systems are not trained on the development data.",
"We only held an open track for French, due to the paucity of training data.",
"The four settings and two tracks result in a total of 7 competitions.",
"Scoring.",
"The following scores an output graph G 1 = (V 1 , E 1 ) against a gold one, G 2 = (V 2 , E 2 ), over the same sequence of terminals (tokens) W .",
"For a node v in V 1 or V 2 , define yield(v) ⊆ W as is its set of terminal descendants.",
"A pair of edges (v 1 , u 1 ) ∈ E 1 and (v 2 , u 2 ) ∈ E 2 with labels (categories) 1 , 2 is matching if yield(u 1 ) = yield(u 2 ) and 1 = 2 .",
"Labeled precision and recall are defined by dividing the number of matching edges in G 1 and G 2 by |E 1 | and |E 2 |, respectively.",
"F 1 is their harmonic mean: · Precision · Recall Precision + Recall Unlabeled precision, recall and F 1 are the same, but without requiring that 1 = 2 for the edges to match.",
"We evaluate these measures for primary and remote edges separately.",
"For a more finegrained evaluation, we additionally report precision, recall and F 1 on edges of each category.",
"13 Participating Systems We received a total of eight submissions to the different tracks: MaskParse@Deskiñ 12 We are not aware of any such annotation, but include this restriction for completeness.",
"13 The official evaluation script providing both coarse-grained and fine-grained scores can be found in https://github.com/huji-nlp/ucca/blob/ master/scripts/evaluate_standard.py.",
"14 It was later discovered that CUNY-PekingU used some of the evaluation data for training in the open tracks, and they were thus disqualified from these tracks.",
"In terms of parsing approaches, the task was quite varied.",
"HLT@SUDA converted UCCA graphs to constituency trees and trained a constituency parser and a recovery mechanism of remote edges in a multi-task framework.",
"MaskParse@Deskiñ used a bidirectional GRU tagger with a masking mechanism.",
"Tüpa and XLangMo used a transition-based approach.",
"UC Davis used an encoder-decoder architecture.",
"GCN-SEM uses a BiLSTM model to predict Semantic Dependency Parsing tags, when the syntactic dependency tree is given in the input.",
"CUNY-PKU is based on an ensemble that includes different variations of the TUPA parser.",
"DAN-GNT@UIT.VNU-HCM converted syntactic dependency trees to UCCA graphs.",
"Different systems handled remote edges differently.",
"DANGNT@UIT.VNU-HCM and GCN-SEM ignored remote edges.",
"UC Davis used a different BiLSTM for remote edges.",
"HLT@SUDA marked remote edges when converting the graph to a constituency tree and trained a classification model for their recovery.",
"MaskParse@Deskiñ handles remote edges by detecting arguments that are outside of the parent's node span using a detection threshold on the output probabilities.",
"In terms of using the data, all teams but one used the UCCA XML format, two used the CoNLL-U format, which is derived by a lossy conversion process, and only one team found the other data formats helpful.",
"One of the teams (MaskParse@Deskiñ) built a new training data adapted to their model by repeating each sentence N times, N being the number of non-terminal nodes in the UCCA graphs.",
"Three of the teams adapted the baseline TUPA parser, or parts of it to form their parser, namely TüPa, CUNY-PekingU and XLangMo; HLT@SUDA used a constituency parser (Stern et al., 2017) as a component in their model; DANGNT@UIT.VNU-HCM is a rule-based system over the Stanford Parser, and the rest are newly constructed parsers.",
"All teams found it useful to use external resources beyond those provided by the Shared Task.",
"Four submissions used external embeddings, MUSE (Conneau et al., 2017) in the case of MaskParse@Deskiñ and XLangMo, ELMo (Peters et al., 2018) in the case of TüPa, 15 and BERT (Devlin et al., 2019) in the case of HLT@SUDA.",
"Other resources included additional unlabeled data (TüPa), a list of multi-word expressions (MaskParse@Deskiñ), and the Stanford parser in the case of DANGNT@UIT.VNU-HCM.",
"Only CUNY-PKU used the 20K unlabeled parallel data in English and French.",
"A common trend for many of the systems was the use of cross-lingual projection or transfer (MaskParse@Deskiñ, HLT@SUDA, TüPa, GCN-Sem, CUNY-PKU and XLangMo).",
"This was necessary for French, and was found helpful for German as well (CUNY-PKU).",
"Table 4 shows the labeled and unlabeled F1 for primary and remote edges, for each system in each track.",
"Overall F1 (All) is the F1 calculated over both primary and remote edges.",
"Full results are available online.",
"16 Figure 3 shows the fine-grained evaluation by labeled F1 per UCCA category, for each system in each track.",
"While Ground edges were uniformly 16 http://bit.ly/semeval2019task1results difficult to parse due to their sparsity in the training data, Relators were the easiest for all systems, as they are both common and predictable.",
"The Process/State distinction proved challenging, and most main relations were identified as the more common Process category.",
"The winning system in most tracks (HLT@SUDA) performed better on almost all categories.",
"Its largest advantage was on Parallel Scenes and Linkers, showing was especially successful at identifying Scene boundaries relative to the other systems, which requires a good understanding of syntax.",
"Results Discussion The HLT@SUDA system participated in all the tracks, obtaining the first place in the six English and German tracks and the second place in the French open track.",
"The system is based on the conversion of UCCA graphs into constituency trees, marking remote and discontinuous edges for recovery.",
"The classification-based recovery of the remote edges is performed simultaneously with the constituency parsing in a multi-task learning framework.",
"This work, which further connects between semantic and syntactic parsing, proposes a recovery mechanism that can be applied to other grammatical formalisms, enabling the conversion of a given formalism to another one for parsing.",
"The idea of this system is inspired by the pseudo non-projective dependency parsing approach proposed by Nivre and Nilsson (2005) .",
"The MaskParse@Deskiñ system only participated to the French open track, focusing on crosslingual parsing.",
"The system uses a semantic tagger, implemented with a bidirectional GRU and a masking mechanism to recursively extract the inner semantic structures in the graph.",
"Multilingual word embeddings are also used.",
"Using the English and German training data as well as the small French trial data for training, the parser ranked fourth in the French open track with a labeled F1 score of 65.4%, suggesting that this new model could be useful for low-resource languages.",
"The Tüpa system takes a transition-based approach, building on the TUPA transition system and oracle, but modifies its feature representations.",
"Specifically, instead of representing the parser configuration using LSTMs over the partially parsed graph, stack and buffer, they use feedforward networks with ELMo contextualized embeddings.",
"The stack and buffer are represented by the top three items on them.",
"For the partially parsed graph, they extract the rightmost and leftmost parents and children of the respective items, and represent them by the ELMo embedding of their form, the embedding of their dependency heads (for terminals, for non-terminals this is replaced with a learned embedding) and the embeddings of all terminal children.",
"Results are generally on-par with the TUPA baseline, and surpass it from the out-of-domain English setting.",
"This suggests that the TUPA architecture may be simplified, without compromising performance.",
"The UC Davis system participated only in the English closed track, where they achieved the second highest score, on par with TUPA.",
"The proposed parser has an encoder-decoder architecture, where the encoder is a simple BiLSTM encoder for each span of words.",
"The decoder iteratively and greedily traverses the sentence, and attempts to predict span boundaries.",
"The basic algorithm yields an unlabeled contiguous phrase-based tree, but additional modules predict the labels of the spans, discontiguous units (by joining together spans from the contiguous tree under a new node), and remote edges.",
"The work is inspired by Kitaev and Klein (2018) , who used similar methods for constituency parsing.",
"The GCN-SEM system uses a BiLSTM encoder, and predicts bi-lexical semantic dependencies (in the SDP format) using word, token and syntactic dependency parses.",
"The latter is incorporated into the network with a graph convolutional network (GCN).",
"The team participated in the English and German closed tracks, and were not among the highest-ranking teams.",
"However, scores on the UCCA test sets converted to the bi-lexical CoNLL-U format were rather high, implying that the lossy conversion could be much of the reason.",
"The CUNY-PKU system was based on an ensemble.",
"The ensemble included variations of TUPA parser, namely the MLP and BiLSTM models (Hershcovich et al., 2017) and the BiLSTM model with an additional MLP.",
"The system also proposes a way to aggregate the ensemble going through CKY parsing and accounting for remotes and discontinuous spans.",
"The team participated in all tracks, including additional information in the open domain, notably synthetic data based on automatically translating annotated texts.",
"Their system ranks first in the French open track.",
"The DANGNT@UIT.VNU-HCM system partic-ipated only in the English Wiki open and closed tracks.",
"The system is based on graph transformations from dependency trees into UCCA, using heuristics to create non-terminal nodes and map the dependency relations to UCCA categories.",
"The manual rules were developed based on the training and development data.",
"As the system converts trees to trees and does not add reentrancies, it does not produce remote edges.",
"While the results are not among the highest-ranking in the task, the primary labeled F1 score of 71.1% in the English open track shows that a rule-based system on top of a leading dependency parser (the Stanford parser) can obtain reasonable results for this task.",
"Conclusion The task has yielded substantial improvements to UCCA parsing in all settings.",
"Given that the best reported results were achieved with different parsing and learning approaches than the baseline model TUPA (which has been the only available parser for UCCA), the task opens a variety of paths for future improvement.",
"Cross-lingual transfer, which capitalizes on UCCA's tendency to be preserved in translation, was employed by a number of systems and has proven remarkably effective.",
"Indeed, the high scores obtained for French parsing in a low-resource setting suggest that high quality UCCA parsing can be straightforwardly extended to additional languages, with only a minimal amount of manual labor.",
"Moreover, given the conceptual similarity between the different semantic representations , it is likely the parsers developed for the shared task will directly contribute to the development of other semantic parsing technology.",
"Such a contribution is facilitated by the available conversion scripts available between UCCA and other formats."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"2",
"6",
"8",
"9"
],
"paper_header_content": [
"Overview",
"Task Definition",
"Data & Resources",
"TUPA: The Baseline Parser",
"Evaluation",
"·",
"Participating Systems",
"Discussion",
"Conclusion"
]
} | GEM-SciDuet-train-25#paper-1026#slide-8 | Participating Systems | 8 groups in total:
MaskParse@Deskiñ Orange Labs, Aix-Marseille University
TüPa University of Tübingen
UC Davis University of California, Davis
GCN-Sem University of Wolverhampton
CUNY-PekingU City University of New York, Peking University
DANGNT@UIT.VNU-HCM University of Information Technology VNU-HCM | 8 groups in total:
MaskParse@Deskiñ Orange Labs, Aix-Marseille University
TüPa University of Tübingen
UC Davis University of California, Davis
GCN-Sem University of Wolverhampton
CUNY-PekingU City University of New York, Peking University
DANGNT@UIT.VNU-HCM University of Information Technology VNU-HCM | [] |
GEM-SciDuet-train-25#paper-1026#slide-9 | 1026 | SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA | We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a crosslinguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found in the task's website https://competitions. codalab.org/competitions/19160. 10 http://spacy.io 11 http://fasttext.cc | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159
],
"paper_content_text": [
"Overview Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes have recently been put forth.",
"Examples include Abstract Meaning Representation (AMR; Banarescu et al., 2013) , Broad-coverage Semantic Dependencies (SDP; Oepen et al., 2016) , Universal Decompositional Semantics (UDS; White et al., 2016) , Parallel Meaning Bank (Abzianidze et al., 2017) , and Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) .",
"These advances in semantic representation, along with corresponding advances in semantic parsing, can potentially benefit essentially all text understanding tasks, and have already demonstrated applicability to a variety of tasks, including summarization (Liu et al., 2015; Dohare and Karnick, 2017) , paraphrase detection (Issa et al., 2018) , and semantic evaluation (using UCCA; see below).",
"In this shared task, we focus on UCCA parsing in multiple languages.",
"One of our goals is to benefit semantic parsing in languages with less annotated resources by making use of data from more resource-rich languages.",
"We refer to this approach as cross-lingual parsing, while other works (Zhang et al., 2017 (Zhang et al., , 2018 define cross-lingual parsing as the task of parsing text in one language to meaning representation in another language.",
"In addition to its potential applicative value, work on semantic parsing poses interesting algorithmic and modeling challenges, which are often different from those tackled in syntactic parsing, including reentrancy (e.g., for sharing arguments across predicates), and the modeling of the interface with lexical semantics.",
"UCCA is a cross-linguistically applicable semantic representation scheme, building on the established Basic Linguistic Theory typological framework (Dixon, 2010b (Dixon, ,a, 2012 .",
"It has demonstrated applicability to multiple languages, including English, French and German, and pilot annotation projects were conducted on a few languages more.",
"UCCA structures have been shown to be well-preserved in translation (Sulem et al., 2015) , and to support rapid annotation by nonexperts, assisted by an accessible annotation interface .",
"1 UCCA has already shown applicative value for text simplifica- Scene Elements P Process The main relation of a Scene that evolves in time (usually an action or movement).",
"S State The main relation of a Scene that does not evolve in time.",
"A Participant Scene participant (including locations, abstract entities and Scenes serving as arguments).",
"D Adverbial A secondary relation in a Scene.",
"Elements of Non-Scene Units C Center Necessary for the conceptualization of the parent unit.",
"E Elaborator A non-Scene relation applying to a single Center.",
"N Connector A non-Scene relation applying to two or more Centers, highlighting a common feature.",
"R Relator All other types of non-Scene relations: (1) Rs that relate a C to some super-ordinate relation, and (2) Rs that relate two Cs pertaining to different aspects of the parent unit.",
"Inter-Scene Relations H Parallel Scene A Scene linked to other Scenes by regular linkage (e.g., temporal, logical, purposive).",
"L Linker A relation between two or more Hs (e.g., \"when\", \"if\", \"in order to\").",
"G Ground A relation between the speech event and the uttered Scene (e.g., \"surprisingly\").",
"Other F Function Does not introduce a relation or participant.",
"Required by some structural pattern.",
"tion (Sulem et al., 2018b) , as well as for defining semantic evaluation measures for text-to-text generation tasks, including machine translation (Birch et al., 2016) , text simplification (Sulem et al., 2018a) and grammatical error correction (Choshen and Abend, 2018) .",
"The shared task defines a number of tracks, based on the different corpora and the availability of external resources (see §5).",
"It received submissions from eight research groups around the world.",
"In all settings at least one of the submitted systems improved over the state-of-the-art TUPA parser (Hershcovich et al., 2017 (Hershcovich et al., , 2018 , used as a baseline.",
"Task Definition UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and non-terminal nodes to semantic units that participate in some super-ordinate relation.",
"Edges are labeled, indicating the role of a child in the relation the parent represents.",
"Nodes and edges belong to one of several layers, each corresponding to a \"module\" of semantic distinctions.",
"UCCA's foundational layer covers the predicate-argument structure evoked by predicates of all grammatical categories (verbal, nominal, adjectival and others), the inter-relations between them, and other major linguistic phenomena such as semantic heads and multi-word expressions.",
"It is the only layer for which annotated corpora exist at the moment, and is thus the target of this shared task.",
"The layer's basic notion is the Scene, describing a state, action, movement or some other relation that evolves in time.",
"Each Scene contains one main relation (marked as either a Process or a State), as well as one or more Participants.",
"For example, the sentence \"After graduation, John moved to Paris\" (Figure 1 ) contains two Scenes, whose main relations are \"graduation\" and \"moved\".",
"\"John\" is a Participant in both Scenes, while \"Paris\" only in the latter.",
"Further categories account for inter-Scene relations and the internal structure of complex arguments and relations (e.g., coordination and multi-word expressions).",
"Table 1 provides a concise description of the categories used by the UCCA foundational layer.",
"UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges (appear dashed in Figure 1 ) that allow for a unit to participate in several super-ordinate relations.",
"Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG.",
"UCCA graphs may contain implicit units with no correspondent in the text.",
"Figure 2 shows the annotation for the sentence \"A similar technique is almost impossible to apply to other crops, such as cotton, soybeans and rice.\"",
"2 It includes a single Scene, whose main relation is \"apply\", a secondary relation \"almost impossible\", as well as two complex arguments: \"a similar technique\" and the coordinated argument \"such as cotton, soybeans, and rice.\"",
"In addition, the Scene includes an implicit argument, which represents the agent of the \"apply\" relation.",
"While parsing technology is well-established for syntactic parsing, UCCA has several formal properties that distinguish it from syntactic representations, mostly UCCA's tendency to abstract away from syntactic detail that do not affect argument structure.",
"For instance, consider the following examples where the concept of a Scene has a different rationale from the syntactic concept of a clause.",
"First, non-verbal predicates in UCCA are represented like verbal ones, such as when they appear in copula clauses or noun phrases.",
"Indeed, in Figure 1 , \"graduation\" and \"moved\" are considered separate Scenes, despite appearing in the same clause.",
"Second, in the same example, \"John\" is marked as a (remote) Participant in the graduation Scene, despite not being explicitly mentioned.",
"Third, consider the possessive construction in \"John's trip home\".",
"While in UCCA \"trip\" evokes a Scene in which \"John\" is a Participant, a syntactic scheme would analyze this phrase similarly to \"John's shoes\".",
"The differences in the challenges posed by syntactic parsing and UCCA parsing, and more generally by semantic parsing, motivate the development of targeted parsing technology to tackle it.",
"Data & Resources All UCCA corpora are freely available.",
"3 For English, we use v1.2.3 of the Wikipedia UCCA corpus (Wiki), v1.2.2 of the UCCA Twenty Thousand Leagues Under the Sea English-French parallel corpus (20K), which includes UCCA manual annotation for the first five chapters in French and English, and v1.0.1 of the UCCA German Twenty 3 https://github.com/ UniversalConceptualCognitiveAnnotation Thousand Leagues Under the Sea corpus, which includes the entire book in German.",
"For consistent annotation, we replace any Time and Quantifier labels with Adverbial and Elaborator in these data sets.",
"The resulting training, development 4 and test sets 5 are publicly available, and the splits are given in Table 2 .",
"Statistics on various structural properties are given in Table 3 .",
"The corpora were manually annotated according to v1.2 of the UCCA guidelines, 6 and reviewed by a second annotator.",
"All data was passed through automatic validation and normalization scripts.",
"7 The goal of validation is to rule out cases that are inconsistent with the UCCA annotation guidelines.",
"For example, a Scene, defined by the presence of a Process or a State, should include at least one Participant.",
"Due to the small amount of annotated data available for French, we only provided a minimal training set of 15 sentences, in addition to the development and test set.",
"Systems for French were expected to pursue semi-supervised approaches, such as cross-lingual learning or structure projection, leveraging the parallel nature of the corpus, or to rely on datasets for related formalisms, such as Universal Dependencies (Nivre et al., 2016) .",
"The full unannotated 20K Leagues corpus in English and French was released as well, in order to facilitate pursuing cross-lingual approaches.",
"Datasets were released in an XML format, including tokenized text automatically pre- processed using spaCy (see §5), and gold-standard UCCA annotation for the train and development sets.",
"8 To facilitate the use of existing NLP tools, we also released the data in SDP, AMR, CoNLL-U and plain text formats.",
"TUPA: The Baseline Parser We use the TUPA parser, the only parser for UCCA at the time the task was announced, as a baseline (Hershcovich et al., 2017 (Hershcovich et al., , 2018 .",
"TUPA is a transition-based DAG parser based on a BiLSTM-based classifier.",
"9 TUPA in itself has been found superior to a number of conversionbased parsers that use existing parsers for other formalisms to parse UCCA by constructing a twoway conversion protocol between the formalisms.",
"It can thus be regarded as a strong baseline for sys-8 https://github.com/ UniversalConceptualCognitiveAnnotation/ docs/blob/master/FORMAT.md 9 https://github.com/huji-nlp/tupa tem submissions to the shared task.",
"Evaluation Tracks.",
"Participants in the task were evaluated in four settings: In order to allow both even ground comparison between systems and using hitherto untried resources, we held both an open and a closed track for submissions in the English and German settings.",
"Closed track submissions were only allowed to use the gold-standard UCCA annotation distributed for the task in the target language, and were limited in their use of additional resources.",
"Concretely, the only additional data they were allowed to use is that used by TUPA, which consists of automatic annotations provided by spaCy: 10 POS tags, syntactic dependency relations, and named entity types and spans.",
"In addition, the closed track only allowed the use of word embeddings provided by fastText (Bojanowski et al., 2017 ) 11 for all languages.",
"Systems in the open track, on the other hand, were allowed to use any additional resource, such as UCCA annotation in other languages, dictionaries or datasets for other tasks, provided that they make sure not to use any additional gold standard annotation over the same text used in the UCCA corpora.",
"12 In both tracks, we required that submitted systems are not trained on the development data.",
"We only held an open track for French, due to the paucity of training data.",
"The four settings and two tracks result in a total of 7 competitions.",
"Scoring.",
"The following scores an output graph G 1 = (V 1 , E 1 ) against a gold one, G 2 = (V 2 , E 2 ), over the same sequence of terminals (tokens) W .",
"For a node v in V 1 or V 2 , define yield(v) ⊆ W as is its set of terminal descendants.",
"A pair of edges (v 1 , u 1 ) ∈ E 1 and (v 2 , u 2 ) ∈ E 2 with labels (categories) 1 , 2 is matching if yield(u 1 ) = yield(u 2 ) and 1 = 2 .",
"Labeled precision and recall are defined by dividing the number of matching edges in G 1 and G 2 by |E 1 | and |E 2 |, respectively.",
"F 1 is their harmonic mean: · Precision · Recall Precision + Recall Unlabeled precision, recall and F 1 are the same, but without requiring that 1 = 2 for the edges to match.",
"We evaluate these measures for primary and remote edges separately.",
"For a more finegrained evaluation, we additionally report precision, recall and F 1 on edges of each category.",
"13 Participating Systems We received a total of eight submissions to the different tracks: MaskParse@Deskiñ 12 We are not aware of any such annotation, but include this restriction for completeness.",
"13 The official evaluation script providing both coarse-grained and fine-grained scores can be found in https://github.com/huji-nlp/ucca/blob/ master/scripts/evaluate_standard.py.",
"14 It was later discovered that CUNY-PekingU used some of the evaluation data for training in the open tracks, and they were thus disqualified from these tracks.",
"In terms of parsing approaches, the task was quite varied.",
"HLT@SUDA converted UCCA graphs to constituency trees and trained a constituency parser and a recovery mechanism of remote edges in a multi-task framework.",
"MaskParse@Deskiñ used a bidirectional GRU tagger with a masking mechanism.",
"Tüpa and XLangMo used a transition-based approach.",
"UC Davis used an encoder-decoder architecture.",
"GCN-SEM uses a BiLSTM model to predict Semantic Dependency Parsing tags, when the syntactic dependency tree is given in the input.",
"CUNY-PKU is based on an ensemble that includes different variations of the TUPA parser.",
"DAN-GNT@UIT.VNU-HCM converted syntactic dependency trees to UCCA graphs.",
"Different systems handled remote edges differently.",
"DANGNT@UIT.VNU-HCM and GCN-SEM ignored remote edges.",
"UC Davis used a different BiLSTM for remote edges.",
"HLT@SUDA marked remote edges when converting the graph to a constituency tree and trained a classification model for their recovery.",
"MaskParse@Deskiñ handles remote edges by detecting arguments that are outside of the parent's node span using a detection threshold on the output probabilities.",
"In terms of using the data, all teams but one used the UCCA XML format, two used the CoNLL-U format, which is derived by a lossy conversion process, and only one team found the other data formats helpful.",
"One of the teams (MaskParse@Deskiñ) built a new training data adapted to their model by repeating each sentence N times, N being the number of non-terminal nodes in the UCCA graphs.",
"Three of the teams adapted the baseline TUPA parser, or parts of it to form their parser, namely TüPa, CUNY-PekingU and XLangMo; HLT@SUDA used a constituency parser (Stern et al., 2017) as a component in their model; DANGNT@UIT.VNU-HCM is a rule-based system over the Stanford Parser, and the rest are newly constructed parsers.",
"All teams found it useful to use external resources beyond those provided by the Shared Task.",
"Four submissions used external embeddings, MUSE (Conneau et al., 2017) in the case of MaskParse@Deskiñ and XLangMo, ELMo (Peters et al., 2018) in the case of TüPa, 15 and BERT (Devlin et al., 2019) in the case of HLT@SUDA.",
"Other resources included additional unlabeled data (TüPa), a list of multi-word expressions (MaskParse@Deskiñ), and the Stanford parser in the case of DANGNT@UIT.VNU-HCM.",
"Only CUNY-PKU used the 20K unlabeled parallel data in English and French.",
"A common trend for many of the systems was the use of cross-lingual projection or transfer (MaskParse@Deskiñ, HLT@SUDA, TüPa, GCN-Sem, CUNY-PKU and XLangMo).",
"This was necessary for French, and was found helpful for German as well (CUNY-PKU).",
"Table 4 shows the labeled and unlabeled F1 for primary and remote edges, for each system in each track.",
"Overall F1 (All) is the F1 calculated over both primary and remote edges.",
"Full results are available online.",
"16 Figure 3 shows the fine-grained evaluation by labeled F1 per UCCA category, for each system in each track.",
"While Ground edges were uniformly 16 http://bit.ly/semeval2019task1results difficult to parse due to their sparsity in the training data, Relators were the easiest for all systems, as they are both common and predictable.",
"The Process/State distinction proved challenging, and most main relations were identified as the more common Process category.",
"The winning system in most tracks (HLT@SUDA) performed better on almost all categories.",
"Its largest advantage was on Parallel Scenes and Linkers, showing was especially successful at identifying Scene boundaries relative to the other systems, which requires a good understanding of syntax.",
"Results Discussion The HLT@SUDA system participated in all the tracks, obtaining the first place in the six English and German tracks and the second place in the French open track.",
"The system is based on the conversion of UCCA graphs into constituency trees, marking remote and discontinuous edges for recovery.",
"The classification-based recovery of the remote edges is performed simultaneously with the constituency parsing in a multi-task learning framework.",
"This work, which further connects between semantic and syntactic parsing, proposes a recovery mechanism that can be applied to other grammatical formalisms, enabling the conversion of a given formalism to another one for parsing.",
"The idea of this system is inspired by the pseudo non-projective dependency parsing approach proposed by Nivre and Nilsson (2005) .",
"The MaskParse@Deskiñ system only participated to the French open track, focusing on crosslingual parsing.",
"The system uses a semantic tagger, implemented with a bidirectional GRU and a masking mechanism to recursively extract the inner semantic structures in the graph.",
"Multilingual word embeddings are also used.",
"Using the English and German training data as well as the small French trial data for training, the parser ranked fourth in the French open track with a labeled F1 score of 65.4%, suggesting that this new model could be useful for low-resource languages.",
"The Tüpa system takes a transition-based approach, building on the TUPA transition system and oracle, but modifies its feature representations.",
"Specifically, instead of representing the parser configuration using LSTMs over the partially parsed graph, stack and buffer, they use feedforward networks with ELMo contextualized embeddings.",
"The stack and buffer are represented by the top three items on them.",
"For the partially parsed graph, they extract the rightmost and leftmost parents and children of the respective items, and represent them by the ELMo embedding of their form, the embedding of their dependency heads (for terminals, for non-terminals this is replaced with a learned embedding) and the embeddings of all terminal children.",
"Results are generally on-par with the TUPA baseline, and surpass it from the out-of-domain English setting.",
"This suggests that the TUPA architecture may be simplified, without compromising performance.",
"The UC Davis system participated only in the English closed track, where they achieved the second highest score, on par with TUPA.",
"The proposed parser has an encoder-decoder architecture, where the encoder is a simple BiLSTM encoder for each span of words.",
"The decoder iteratively and greedily traverses the sentence, and attempts to predict span boundaries.",
"The basic algorithm yields an unlabeled contiguous phrase-based tree, but additional modules predict the labels of the spans, discontiguous units (by joining together spans from the contiguous tree under a new node), and remote edges.",
"The work is inspired by Kitaev and Klein (2018) , who used similar methods for constituency parsing.",
"The GCN-SEM system uses a BiLSTM encoder, and predicts bi-lexical semantic dependencies (in the SDP format) using word, token and syntactic dependency parses.",
"The latter is incorporated into the network with a graph convolutional network (GCN).",
"The team participated in the English and German closed tracks, and were not among the highest-ranking teams.",
"However, scores on the UCCA test sets converted to the bi-lexical CoNLL-U format were rather high, implying that the lossy conversion could be much of the reason.",
"The CUNY-PKU system was based on an ensemble.",
"The ensemble included variations of TUPA parser, namely the MLP and BiLSTM models (Hershcovich et al., 2017) and the BiLSTM model with an additional MLP.",
"The system also proposes a way to aggregate the ensemble going through CKY parsing and accounting for remotes and discontinuous spans.",
"The team participated in all tracks, including additional information in the open domain, notably synthetic data based on automatically translating annotated texts.",
"Their system ranks first in the French open track.",
"The DANGNT@UIT.VNU-HCM system partic-ipated only in the English Wiki open and closed tracks.",
"The system is based on graph transformations from dependency trees into UCCA, using heuristics to create non-terminal nodes and map the dependency relations to UCCA categories.",
"The manual rules were developed based on the training and development data.",
"As the system converts trees to trees and does not add reentrancies, it does not produce remote edges.",
"While the results are not among the highest-ranking in the task, the primary labeled F1 score of 71.1% in the English open track shows that a rule-based system on top of a leading dependency parser (the Stanford parser) can obtain reasonable results for this task.",
"Conclusion The task has yielded substantial improvements to UCCA parsing in all settings.",
"Given that the best reported results were achieved with different parsing and learning approaches than the baseline model TUPA (which has been the only available parser for UCCA), the task opens a variety of paths for future improvement.",
"Cross-lingual transfer, which capitalizes on UCCA's tendency to be preserved in translation, was employed by a number of systems and has proven remarkably effective.",
"Indeed, the high scores obtained for French parsing in a low-resource setting suggest that high quality UCCA parsing can be straightforwardly extended to additional languages, with only a minimal amount of manual labor.",
"Moreover, given the conceptual similarity between the different semantic representations , it is likely the parsers developed for the shared task will directly contribute to the development of other semantic parsing technology.",
"Such a contribution is facilitated by the available conversion scripts available between UCCA and other formats."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"2",
"6",
"8",
"9"
],
"paper_header_content": [
"Overview",
"Task Definition",
"Data & Resources",
"TUPA: The Baseline Parser",
"Evaluation",
"·",
"Participating Systems",
"Discussion",
"Conclusion"
]
} | GEM-SciDuet-train-25#paper-1026#slide-9 | Leaderboard | Track 1st place 2nd place 3rd place baseline
English-Wiki closed HLT@SUDA baseline Davis
English-Wiki open HLT@SUDA CUNY-PekingU 0.800 TuPa
English-20K closed HLT@SUDA baseline CUNY-PekingU 0.669
English-20K open HLT@SUDA CUNY-PekingU 0.739 TuPa
German-20K closed HLT@SUDA CUNY-PekingU 0.797 baseline
German-20K open HLT@SUDA CUNY-PekingU 0.841 baseline
French-20K open CUNY-PekingU 0.796 HLT@SUDA XLangMo | Track 1st place 2nd place 3rd place baseline
English-Wiki closed HLT@SUDA baseline Davis
English-Wiki open HLT@SUDA CUNY-PekingU 0.800 TuPa
English-20K closed HLT@SUDA baseline CUNY-PekingU 0.669
English-20K open HLT@SUDA CUNY-PekingU 0.739 TuPa
German-20K closed HLT@SUDA CUNY-PekingU 0.797 baseline
German-20K open HLT@SUDA CUNY-PekingU 0.841 baseline
French-20K open CUNY-PekingU 0.796 HLT@SUDA XLangMo | [] |
GEM-SciDuet-train-25#paper-1026#slide-10 | 1026 | SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA | We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a crosslinguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found in the task's website https://competitions. codalab.org/competitions/19160. 10 http://spacy.io 11 http://fasttext.cc | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159
],
"paper_content_text": [
"Overview Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes have recently been put forth.",
"Examples include Abstract Meaning Representation (AMR; Banarescu et al., 2013) , Broad-coverage Semantic Dependencies (SDP; Oepen et al., 2016) , Universal Decompositional Semantics (UDS; White et al., 2016) , Parallel Meaning Bank (Abzianidze et al., 2017) , and Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) .",
"These advances in semantic representation, along with corresponding advances in semantic parsing, can potentially benefit essentially all text understanding tasks, and have already demonstrated applicability to a variety of tasks, including summarization (Liu et al., 2015; Dohare and Karnick, 2017) , paraphrase detection (Issa et al., 2018) , and semantic evaluation (using UCCA; see below).",
"In this shared task, we focus on UCCA parsing in multiple languages.",
"One of our goals is to benefit semantic parsing in languages with less annotated resources by making use of data from more resource-rich languages.",
"We refer to this approach as cross-lingual parsing, while other works (Zhang et al., 2017 (Zhang et al., , 2018 define cross-lingual parsing as the task of parsing text in one language to meaning representation in another language.",
"In addition to its potential applicative value, work on semantic parsing poses interesting algorithmic and modeling challenges, which are often different from those tackled in syntactic parsing, including reentrancy (e.g., for sharing arguments across predicates), and the modeling of the interface with lexical semantics.",
"UCCA is a cross-linguistically applicable semantic representation scheme, building on the established Basic Linguistic Theory typological framework (Dixon, 2010b (Dixon, ,a, 2012 .",
"It has demonstrated applicability to multiple languages, including English, French and German, and pilot annotation projects were conducted on a few languages more.",
"UCCA structures have been shown to be well-preserved in translation (Sulem et al., 2015) , and to support rapid annotation by nonexperts, assisted by an accessible annotation interface .",
"1 UCCA has already shown applicative value for text simplifica- Scene Elements P Process The main relation of a Scene that evolves in time (usually an action or movement).",
"S State The main relation of a Scene that does not evolve in time.",
"A Participant Scene participant (including locations, abstract entities and Scenes serving as arguments).",
"D Adverbial A secondary relation in a Scene.",
"Elements of Non-Scene Units C Center Necessary for the conceptualization of the parent unit.",
"E Elaborator A non-Scene relation applying to a single Center.",
"N Connector A non-Scene relation applying to two or more Centers, highlighting a common feature.",
"R Relator All other types of non-Scene relations: (1) Rs that relate a C to some super-ordinate relation, and (2) Rs that relate two Cs pertaining to different aspects of the parent unit.",
"Inter-Scene Relations H Parallel Scene A Scene linked to other Scenes by regular linkage (e.g., temporal, logical, purposive).",
"L Linker A relation between two or more Hs (e.g., \"when\", \"if\", \"in order to\").",
"G Ground A relation between the speech event and the uttered Scene (e.g., \"surprisingly\").",
"Other F Function Does not introduce a relation or participant.",
"Required by some structural pattern.",
"tion (Sulem et al., 2018b) , as well as for defining semantic evaluation measures for text-to-text generation tasks, including machine translation (Birch et al., 2016) , text simplification (Sulem et al., 2018a) and grammatical error correction (Choshen and Abend, 2018) .",
"The shared task defines a number of tracks, based on the different corpora and the availability of external resources (see §5).",
"It received submissions from eight research groups around the world.",
"In all settings at least one of the submitted systems improved over the state-of-the-art TUPA parser (Hershcovich et al., 2017 (Hershcovich et al., , 2018 , used as a baseline.",
"Task Definition UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and non-terminal nodes to semantic units that participate in some super-ordinate relation.",
"Edges are labeled, indicating the role of a child in the relation the parent represents.",
"Nodes and edges belong to one of several layers, each corresponding to a \"module\" of semantic distinctions.",
"UCCA's foundational layer covers the predicate-argument structure evoked by predicates of all grammatical categories (verbal, nominal, adjectival and others), the inter-relations between them, and other major linguistic phenomena such as semantic heads and multi-word expressions.",
"It is the only layer for which annotated corpora exist at the moment, and is thus the target of this shared task.",
"The layer's basic notion is the Scene, describing a state, action, movement or some other relation that evolves in time.",
"Each Scene contains one main relation (marked as either a Process or a State), as well as one or more Participants.",
"For example, the sentence \"After graduation, John moved to Paris\" (Figure 1 ) contains two Scenes, whose main relations are \"graduation\" and \"moved\".",
"\"John\" is a Participant in both Scenes, while \"Paris\" only in the latter.",
"Further categories account for inter-Scene relations and the internal structure of complex arguments and relations (e.g., coordination and multi-word expressions).",
"Table 1 provides a concise description of the categories used by the UCCA foundational layer.",
"UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges (appear dashed in Figure 1 ) that allow for a unit to participate in several super-ordinate relations.",
"Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG.",
"UCCA graphs may contain implicit units with no correspondent in the text.",
"Figure 2 shows the annotation for the sentence \"A similar technique is almost impossible to apply to other crops, such as cotton, soybeans and rice.\"",
"2 It includes a single Scene, whose main relation is \"apply\", a secondary relation \"almost impossible\", as well as two complex arguments: \"a similar technique\" and the coordinated argument \"such as cotton, soybeans, and rice.\"",
"In addition, the Scene includes an implicit argument, which represents the agent of the \"apply\" relation.",
"While parsing technology is well-established for syntactic parsing, UCCA has several formal properties that distinguish it from syntactic representations, mostly UCCA's tendency to abstract away from syntactic detail that do not affect argument structure.",
"For instance, consider the following examples where the concept of a Scene has a different rationale from the syntactic concept of a clause.",
"First, non-verbal predicates in UCCA are represented like verbal ones, such as when they appear in copula clauses or noun phrases.",
"Indeed, in Figure 1 , \"graduation\" and \"moved\" are considered separate Scenes, despite appearing in the same clause.",
"Second, in the same example, \"John\" is marked as a (remote) Participant in the graduation Scene, despite not being explicitly mentioned.",
"Third, consider the possessive construction in \"John's trip home\".",
"While in UCCA \"trip\" evokes a Scene in which \"John\" is a Participant, a syntactic scheme would analyze this phrase similarly to \"John's shoes\".",
"The differences in the challenges posed by syntactic parsing and UCCA parsing, and more generally by semantic parsing, motivate the development of targeted parsing technology to tackle it.",
"Data & Resources All UCCA corpora are freely available.",
"3 For English, we use v1.2.3 of the Wikipedia UCCA corpus (Wiki), v1.2.2 of the UCCA Twenty Thousand Leagues Under the Sea English-French parallel corpus (20K), which includes UCCA manual annotation for the first five chapters in French and English, and v1.0.1 of the UCCA German Twenty 3 https://github.com/ UniversalConceptualCognitiveAnnotation Thousand Leagues Under the Sea corpus, which includes the entire book in German.",
"For consistent annotation, we replace any Time and Quantifier labels with Adverbial and Elaborator in these data sets.",
"The resulting training, development 4 and test sets 5 are publicly available, and the splits are given in Table 2 .",
"Statistics on various structural properties are given in Table 3 .",
"The corpora were manually annotated according to v1.2 of the UCCA guidelines, 6 and reviewed by a second annotator.",
"All data was passed through automatic validation and normalization scripts.",
"7 The goal of validation is to rule out cases that are inconsistent with the UCCA annotation guidelines.",
"For example, a Scene, defined by the presence of a Process or a State, should include at least one Participant.",
"Due to the small amount of annotated data available for French, we only provided a minimal training set of 15 sentences, in addition to the development and test set.",
"Systems for French were expected to pursue semi-supervised approaches, such as cross-lingual learning or structure projection, leveraging the parallel nature of the corpus, or to rely on datasets for related formalisms, such as Universal Dependencies (Nivre et al., 2016) .",
"The full unannotated 20K Leagues corpus in English and French was released as well, in order to facilitate pursuing cross-lingual approaches.",
"Datasets were released in an XML format, including tokenized text automatically pre- processed using spaCy (see §5), and gold-standard UCCA annotation for the train and development sets.",
"8 To facilitate the use of existing NLP tools, we also released the data in SDP, AMR, CoNLL-U and plain text formats.",
"TUPA: The Baseline Parser We use the TUPA parser, the only parser for UCCA at the time the task was announced, as a baseline (Hershcovich et al., 2017 (Hershcovich et al., , 2018 .",
"TUPA is a transition-based DAG parser based on a BiLSTM-based classifier.",
"9 TUPA in itself has been found superior to a number of conversionbased parsers that use existing parsers for other formalisms to parse UCCA by constructing a twoway conversion protocol between the formalisms.",
"It can thus be regarded as a strong baseline for sys-8 https://github.com/ UniversalConceptualCognitiveAnnotation/ docs/blob/master/FORMAT.md 9 https://github.com/huji-nlp/tupa tem submissions to the shared task.",
"Evaluation Tracks.",
"Participants in the task were evaluated in four settings: In order to allow both even ground comparison between systems and using hitherto untried resources, we held both an open and a closed track for submissions in the English and German settings.",
"Closed track submissions were only allowed to use the gold-standard UCCA annotation distributed for the task in the target language, and were limited in their use of additional resources.",
"Concretely, the only additional data they were allowed to use is that used by TUPA, which consists of automatic annotations provided by spaCy: 10 POS tags, syntactic dependency relations, and named entity types and spans.",
"In addition, the closed track only allowed the use of word embeddings provided by fastText (Bojanowski et al., 2017 ) 11 for all languages.",
"Systems in the open track, on the other hand, were allowed to use any additional resource, such as UCCA annotation in other languages, dictionaries or datasets for other tasks, provided that they make sure not to use any additional gold standard annotation over the same text used in the UCCA corpora.",
"12 In both tracks, we required that submitted systems are not trained on the development data.",
"We only held an open track for French, due to the paucity of training data.",
"The four settings and two tracks result in a total of 7 competitions.",
"Scoring.",
"The following scores an output graph G 1 = (V 1 , E 1 ) against a gold one, G 2 = (V 2 , E 2 ), over the same sequence of terminals (tokens) W .",
"For a node v in V 1 or V 2 , define yield(v) ⊆ W as is its set of terminal descendants.",
"A pair of edges (v 1 , u 1 ) ∈ E 1 and (v 2 , u 2 ) ∈ E 2 with labels (categories) 1 , 2 is matching if yield(u 1 ) = yield(u 2 ) and 1 = 2 .",
"Labeled precision and recall are defined by dividing the number of matching edges in G 1 and G 2 by |E 1 | and |E 2 |, respectively.",
"F 1 is their harmonic mean: · Precision · Recall Precision + Recall Unlabeled precision, recall and F 1 are the same, but without requiring that 1 = 2 for the edges to match.",
"We evaluate these measures for primary and remote edges separately.",
"For a more finegrained evaluation, we additionally report precision, recall and F 1 on edges of each category.",
"13 Participating Systems We received a total of eight submissions to the different tracks: MaskParse@Deskiñ 12 We are not aware of any such annotation, but include this restriction for completeness.",
"13 The official evaluation script providing both coarse-grained and fine-grained scores can be found in https://github.com/huji-nlp/ucca/blob/ master/scripts/evaluate_standard.py.",
"14 It was later discovered that CUNY-PekingU used some of the evaluation data for training in the open tracks, and they were thus disqualified from these tracks.",
"In terms of parsing approaches, the task was quite varied.",
"HLT@SUDA converted UCCA graphs to constituency trees and trained a constituency parser and a recovery mechanism of remote edges in a multi-task framework.",
"MaskParse@Deskiñ used a bidirectional GRU tagger with a masking mechanism.",
"Tüpa and XLangMo used a transition-based approach.",
"UC Davis used an encoder-decoder architecture.",
"GCN-SEM uses a BiLSTM model to predict Semantic Dependency Parsing tags, when the syntactic dependency tree is given in the input.",
"CUNY-PKU is based on an ensemble that includes different variations of the TUPA parser.",
"DAN-GNT@UIT.VNU-HCM converted syntactic dependency trees to UCCA graphs.",
"Different systems handled remote edges differently.",
"DANGNT@UIT.VNU-HCM and GCN-SEM ignored remote edges.",
"UC Davis used a different BiLSTM for remote edges.",
"HLT@SUDA marked remote edges when converting the graph to a constituency tree and trained a classification model for their recovery.",
"MaskParse@Deskiñ handles remote edges by detecting arguments that are outside of the parent's node span using a detection threshold on the output probabilities.",
"In terms of using the data, all teams but one used the UCCA XML format, two used the CoNLL-U format, which is derived by a lossy conversion process, and only one team found the other data formats helpful.",
"One of the teams (MaskParse@Deskiñ) built a new training data adapted to their model by repeating each sentence N times, N being the number of non-terminal nodes in the UCCA graphs.",
"Three of the teams adapted the baseline TUPA parser, or parts of it to form their parser, namely TüPa, CUNY-PekingU and XLangMo; HLT@SUDA used a constituency parser (Stern et al., 2017) as a component in their model; DANGNT@UIT.VNU-HCM is a rule-based system over the Stanford Parser, and the rest are newly constructed parsers.",
"All teams found it useful to use external resources beyond those provided by the Shared Task.",
"Four submissions used external embeddings, MUSE (Conneau et al., 2017) in the case of MaskParse@Deskiñ and XLangMo, ELMo (Peters et al., 2018) in the case of TüPa, 15 and BERT (Devlin et al., 2019) in the case of HLT@SUDA.",
"Other resources included additional unlabeled data (TüPa), a list of multi-word expressions (MaskParse@Deskiñ), and the Stanford parser in the case of DANGNT@UIT.VNU-HCM.",
"Only CUNY-PKU used the 20K unlabeled parallel data in English and French.",
"A common trend for many of the systems was the use of cross-lingual projection or transfer (MaskParse@Deskiñ, HLT@SUDA, TüPa, GCN-Sem, CUNY-PKU and XLangMo).",
"This was necessary for French, and was found helpful for German as well (CUNY-PKU).",
"Table 4 shows the labeled and unlabeled F1 for primary and remote edges, for each system in each track.",
"Overall F1 (All) is the F1 calculated over both primary and remote edges.",
"Full results are available online.",
"16 Figure 3 shows the fine-grained evaluation by labeled F1 per UCCA category, for each system in each track.",
"While Ground edges were uniformly 16 http://bit.ly/semeval2019task1results difficult to parse due to their sparsity in the training data, Relators were the easiest for all systems, as they are both common and predictable.",
"The Process/State distinction proved challenging, and most main relations were identified as the more common Process category.",
"The winning system in most tracks (HLT@SUDA) performed better on almost all categories.",
"Its largest advantage was on Parallel Scenes and Linkers, showing was especially successful at identifying Scene boundaries relative to the other systems, which requires a good understanding of syntax.",
"Results Discussion The HLT@SUDA system participated in all the tracks, obtaining the first place in the six English and German tracks and the second place in the French open track.",
"The system is based on the conversion of UCCA graphs into constituency trees, marking remote and discontinuous edges for recovery.",
"The classification-based recovery of the remote edges is performed simultaneously with the constituency parsing in a multi-task learning framework.",
"This work, which further connects between semantic and syntactic parsing, proposes a recovery mechanism that can be applied to other grammatical formalisms, enabling the conversion of a given formalism to another one for parsing.",
"The idea of this system is inspired by the pseudo non-projective dependency parsing approach proposed by Nivre and Nilsson (2005) .",
"The MaskParse@Deskiñ system only participated to the French open track, focusing on crosslingual parsing.",
"The system uses a semantic tagger, implemented with a bidirectional GRU and a masking mechanism to recursively extract the inner semantic structures in the graph.",
"Multilingual word embeddings are also used.",
"Using the English and German training data as well as the small French trial data for training, the parser ranked fourth in the French open track with a labeled F1 score of 65.4%, suggesting that this new model could be useful for low-resource languages.",
"The Tüpa system takes a transition-based approach, building on the TUPA transition system and oracle, but modifies its feature representations.",
"Specifically, instead of representing the parser configuration using LSTMs over the partially parsed graph, stack and buffer, they use feedforward networks with ELMo contextualized embeddings.",
"The stack and buffer are represented by the top three items on them.",
"For the partially parsed graph, they extract the rightmost and leftmost parents and children of the respective items, and represent them by the ELMo embedding of their form, the embedding of their dependency heads (for terminals, for non-terminals this is replaced with a learned embedding) and the embeddings of all terminal children.",
"Results are generally on-par with the TUPA baseline, and surpass it from the out-of-domain English setting.",
"This suggests that the TUPA architecture may be simplified, without compromising performance.",
"The UC Davis system participated only in the English closed track, where they achieved the second highest score, on par with TUPA.",
"The proposed parser has an encoder-decoder architecture, where the encoder is a simple BiLSTM encoder for each span of words.",
"The decoder iteratively and greedily traverses the sentence, and attempts to predict span boundaries.",
"The basic algorithm yields an unlabeled contiguous phrase-based tree, but additional modules predict the labels of the spans, discontiguous units (by joining together spans from the contiguous tree under a new node), and remote edges.",
"The work is inspired by Kitaev and Klein (2018) , who used similar methods for constituency parsing.",
"The GCN-SEM system uses a BiLSTM encoder, and predicts bi-lexical semantic dependencies (in the SDP format) using word, token and syntactic dependency parses.",
"The latter is incorporated into the network with a graph convolutional network (GCN).",
"The team participated in the English and German closed tracks, and were not among the highest-ranking teams.",
"However, scores on the UCCA test sets converted to the bi-lexical CoNLL-U format were rather high, implying that the lossy conversion could be much of the reason.",
"The CUNY-PKU system was based on an ensemble.",
"The ensemble included variations of TUPA parser, namely the MLP and BiLSTM models (Hershcovich et al., 2017) and the BiLSTM model with an additional MLP.",
"The system also proposes a way to aggregate the ensemble going through CKY parsing and accounting for remotes and discontinuous spans.",
"The team participated in all tracks, including additional information in the open domain, notably synthetic data based on automatically translating annotated texts.",
"Their system ranks first in the French open track.",
"The DANGNT@UIT.VNU-HCM system partic-ipated only in the English Wiki open and closed tracks.",
"The system is based on graph transformations from dependency trees into UCCA, using heuristics to create non-terminal nodes and map the dependency relations to UCCA categories.",
"The manual rules were developed based on the training and development data.",
"As the system converts trees to trees and does not add reentrancies, it does not produce remote edges.",
"While the results are not among the highest-ranking in the task, the primary labeled F1 score of 71.1% in the English open track shows that a rule-based system on top of a leading dependency parser (the Stanford parser) can obtain reasonable results for this task.",
"Conclusion The task has yielded substantial improvements to UCCA parsing in all settings.",
"Given that the best reported results were achieved with different parsing and learning approaches than the baseline model TUPA (which has been the only available parser for UCCA), the task opens a variety of paths for future improvement.",
"Cross-lingual transfer, which capitalizes on UCCA's tendency to be preserved in translation, was employed by a number of systems and has proven remarkably effective.",
"Indeed, the high scores obtained for French parsing in a low-resource setting suggest that high quality UCCA parsing can be straightforwardly extended to additional languages, with only a minimal amount of manual labor.",
"Moreover, given the conceptual similarity between the different semantic representations , it is likely the parsers developed for the shared task will directly contribute to the development of other semantic parsing technology.",
"Such a contribution is facilitated by the available conversion scripts available between UCCA and other formats."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"2",
"6",
"8",
"9"
],
"paper_header_content": [
"Overview",
"Task Definition",
"Data & Resources",
"TUPA: The Baseline Parser",
"Evaluation",
"·",
"Participating Systems",
"Discussion",
"Conclusion"
]
} | GEM-SciDuet-train-25#paper-1026#slide-10 | Main Findings | HLT@SUDA won 6/7 tracks:
Neural constituency parser + multi-task + BERT
French: trained on all languages, with language embedding
CUNY-PekingU won the French (open) track:
TUPA ensemble + synthetic data by machine translation
Surprisingly, results in French were close to English and German
Demonstrates viability of cross-lingual UCCA parsing
Is this because of UCCA's stability in translation? | HLT@SUDA won 6/7 tracks:
Neural constituency parser + multi-task + BERT
French: trained on all languages, with language embedding
CUNY-PekingU won the French (open) track:
TUPA ensemble + synthetic data by machine translation
Surprisingly, results in French were close to English and German
Demonstrates viability of cross-lingual UCCA parsing
Is this because of UCCA's stability in translation? | []
GEM-SciDuet-train-25#paper-1026#slide-11 | 1026 | SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA | We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a crosslinguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found in the task's website https://competitions. codalab.org/competitions/19160. 10 http://spacy.io 11 http://fasttext.cc | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159
],
"paper_content_text": [
"Overview Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes have recently been put forth.",
"Examples include Abstract Meaning Representation (AMR; Banarescu et al., 2013) , Broad-coverage Semantic Dependencies (SDP; Oepen et al., 2016) , Universal Decompositional Semantics (UDS; White et al., 2016) , Parallel Meaning Bank (Abzianidze et al., 2017) , and Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) .",
"These advances in semantic representation, along with corresponding advances in semantic parsing, can potentially benefit essentially all text understanding tasks, and have already demonstrated applicability to a variety of tasks, including summarization (Liu et al., 2015; Dohare and Karnick, 2017) , paraphrase detection (Issa et al., 2018) , and semantic evaluation (using UCCA; see below).",
"In this shared task, we focus on UCCA parsing in multiple languages.",
"One of our goals is to benefit semantic parsing in languages with less annotated resources by making use of data from more resource-rich languages.",
"We refer to this approach as cross-lingual parsing, while other works (Zhang et al., 2017 (Zhang et al., , 2018 define cross-lingual parsing as the task of parsing text in one language to meaning representation in another language.",
"In addition to its potential applicative value, work on semantic parsing poses interesting algorithmic and modeling challenges, which are often different from those tackled in syntactic parsing, including reentrancy (e.g., for sharing arguments across predicates), and the modeling of the interface with lexical semantics.",
"UCCA is a cross-linguistically applicable semantic representation scheme, building on the established Basic Linguistic Theory typological framework (Dixon, 2010b (Dixon, ,a, 2012 .",
"It has demonstrated applicability to multiple languages, including English, French and German, and pilot annotation projects were conducted on a few languages more.",
"UCCA structures have been shown to be well-preserved in translation (Sulem et al., 2015) , and to support rapid annotation by nonexperts, assisted by an accessible annotation interface .",
"1 UCCA has already shown applicative value for text simplifica- Scene Elements P Process The main relation of a Scene that evolves in time (usually an action or movement).",
"S State The main relation of a Scene that does not evolve in time.",
"A Participant Scene participant (including locations, abstract entities and Scenes serving as arguments).",
"D Adverbial A secondary relation in a Scene.",
"Elements of Non-Scene Units C Center Necessary for the conceptualization of the parent unit.",
"E Elaborator A non-Scene relation applying to a single Center.",
"N Connector A non-Scene relation applying to two or more Centers, highlighting a common feature.",
"R Relator All other types of non-Scene relations: (1) Rs that relate a C to some super-ordinate relation, and (2) Rs that relate two Cs pertaining to different aspects of the parent unit.",
"Inter-Scene Relations H Parallel Scene A Scene linked to other Scenes by regular linkage (e.g., temporal, logical, purposive).",
"L Linker A relation between two or more Hs (e.g., \"when\", \"if\", \"in order to\").",
"G Ground A relation between the speech event and the uttered Scene (e.g., \"surprisingly\").",
"Other F Function Does not introduce a relation or participant.",
"Required by some structural pattern.",
"tion (Sulem et al., 2018b) , as well as for defining semantic evaluation measures for text-to-text generation tasks, including machine translation (Birch et al., 2016) , text simplification (Sulem et al., 2018a) and grammatical error correction (Choshen and Abend, 2018) .",
"The shared task defines a number of tracks, based on the different corpora and the availability of external resources (see §5).",
"It received submissions from eight research groups around the world.",
"In all settings at least one of the submitted systems improved over the state-of-the-art TUPA parser (Hershcovich et al., 2017 (Hershcovich et al., , 2018 , used as a baseline.",
"Task Definition UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and non-terminal nodes to semantic units that participate in some super-ordinate relation.",
"Edges are labeled, indicating the role of a child in the relation the parent represents.",
"Nodes and edges belong to one of several layers, each corresponding to a \"module\" of semantic distinctions.",
"UCCA's foundational layer covers the predicate-argument structure evoked by predicates of all grammatical categories (verbal, nominal, adjectival and others), the inter-relations between them, and other major linguistic phenomena such as semantic heads and multi-word expressions.",
"It is the only layer for which annotated corpora exist at the moment, and is thus the target of this shared task.",
"The layer's basic notion is the Scene, describing a state, action, movement or some other relation that evolves in time.",
"Each Scene contains one main relation (marked as either a Process or a State), as well as one or more Participants.",
"For example, the sentence \"After graduation, John moved to Paris\" (Figure 1 ) contains two Scenes, whose main relations are \"graduation\" and \"moved\".",
"\"John\" is a Participant in both Scenes, while \"Paris\" only in the latter.",
"Further categories account for inter-Scene relations and the internal structure of complex arguments and relations (e.g., coordination and multi-word expressions).",
"Table 1 provides a concise description of the categories used by the UCCA foundational layer.",
"UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges (appear dashed in Figure 1 ) that allow for a unit to participate in several super-ordinate relations.",
"Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG.",
"UCCA graphs may contain implicit units with no correspondent in the text.",
"Figure 2 shows the annotation for the sentence \"A similar technique is almost impossible to apply to other crops, such as cotton, soybeans and rice.\"",
"2 It includes a single Scene, whose main relation is \"apply\", a secondary relation \"almost impossible\", as well as two complex arguments: \"a similar technique\" and the coordinated argument \"such as cotton, soybeans, and rice.\"",
"In addition, the Scene includes an implicit argument, which represents the agent of the \"apply\" relation.",
"While parsing technology is well-established for syntactic parsing, UCCA has several formal properties that distinguish it from syntactic representations, mostly UCCA's tendency to abstract away from syntactic detail that do not affect argument structure.",
"For instance, consider the following examples where the concept of a Scene has a different rationale from the syntactic concept of a clause.",
"First, non-verbal predicates in UCCA are represented like verbal ones, such as when they appear in copula clauses or noun phrases.",
"Indeed, in Figure 1 , \"graduation\" and \"moved\" are considered separate Scenes, despite appearing in the same clause.",
"Second, in the same example, \"John\" is marked as a (remote) Participant in the graduation Scene, despite not being explicitly mentioned.",
"Third, consider the possessive construction in \"John's trip home\".",
"While in UCCA \"trip\" evokes a Scene in which \"John\" is a Participant, a syntactic scheme would analyze this phrase similarly to \"John's shoes\".",
"The differences in the challenges posed by syntactic parsing and UCCA parsing, and more generally by semantic parsing, motivate the development of targeted parsing technology to tackle it.",
"Data & Resources All UCCA corpora are freely available.",
"3 For English, we use v1.2.3 of the Wikipedia UCCA corpus (Wiki), v1.2.2 of the UCCA Twenty Thousand Leagues Under the Sea English-French parallel corpus (20K), which includes UCCA manual annotation for the first five chapters in French and English, and v1.0.1 of the UCCA German Twenty 3 https://github.com/ UniversalConceptualCognitiveAnnotation Thousand Leagues Under the Sea corpus, which includes the entire book in German.",
"For consistent annotation, we replace any Time and Quantifier labels with Adverbial and Elaborator in these data sets.",
"The resulting training, development 4 and test sets 5 are publicly available, and the splits are given in Table 2 .",
"Statistics on various structural properties are given in Table 3 .",
"The corpora were manually annotated according to v1.2 of the UCCA guidelines, 6 and reviewed by a second annotator.",
"All data was passed through automatic validation and normalization scripts.",
"7 The goal of validation is to rule out cases that are inconsistent with the UCCA annotation guidelines.",
"For example, a Scene, defined by the presence of a Process or a State, should include at least one Participant.",
"Due to the small amount of annotated data available for French, we only provided a minimal training set of 15 sentences, in addition to the development and test set.",
"Systems for French were expected to pursue semi-supervised approaches, such as cross-lingual learning or structure projection, leveraging the parallel nature of the corpus, or to rely on datasets for related formalisms, such as Universal Dependencies (Nivre et al., 2016) .",
"The full unannotated 20K Leagues corpus in English and French was released as well, in order to facilitate pursuing cross-lingual approaches.",
"Datasets were released in an XML format, including tokenized text automatically pre- processed using spaCy (see §5), and gold-standard UCCA annotation for the train and development sets.",
"8 To facilitate the use of existing NLP tools, we also released the data in SDP, AMR, CoNLL-U and plain text formats.",
"TUPA: The Baseline Parser We use the TUPA parser, the only parser for UCCA at the time the task was announced, as a baseline (Hershcovich et al., 2017 (Hershcovich et al., , 2018 .",
"TUPA is a transition-based DAG parser based on a BiLSTM-based classifier.",
"9 TUPA in itself has been found superior to a number of conversionbased parsers that use existing parsers for other formalisms to parse UCCA by constructing a twoway conversion protocol between the formalisms.",
"It can thus be regarded as a strong baseline for sys-8 https://github.com/ UniversalConceptualCognitiveAnnotation/ docs/blob/master/FORMAT.md 9 https://github.com/huji-nlp/tupa tem submissions to the shared task.",
"Evaluation Tracks.",
"Participants in the task were evaluated in four settings: In order to allow both even ground comparison between systems and using hitherto untried resources, we held both an open and a closed track for submissions in the English and German settings.",
"Closed track submissions were only allowed to use the gold-standard UCCA annotation distributed for the task in the target language, and were limited in their use of additional resources.",
"Concretely, the only additional data they were allowed to use is that used by TUPA, which consists of automatic annotations provided by spaCy: 10 POS tags, syntactic dependency relations, and named entity types and spans.",
"In addition, the closed track only allowed the use of word embeddings provided by fastText (Bojanowski et al., 2017 ) 11 for all languages.",
"Systems in the open track, on the other hand, were allowed to use any additional resource, such as UCCA annotation in other languages, dictionaries or datasets for other tasks, provided that they make sure not to use any additional gold standard annotation over the same text used in the UCCA corpora.",
"12 In both tracks, we required that submitted systems are not trained on the development data.",
"We only held an open track for French, due to the paucity of training data.",
"The four settings and two tracks result in a total of 7 competitions.",
"Scoring.",
"The following scores an output graph G 1 = (V 1 , E 1 ) against a gold one, G 2 = (V 2 , E 2 ), over the same sequence of terminals (tokens) W .",
"For a node v in V 1 or V 2 , define yield(v) ⊆ W as is its set of terminal descendants.",
"A pair of edges (v 1 , u 1 ) ∈ E 1 and (v 2 , u 2 ) ∈ E 2 with labels (categories) 1 , 2 is matching if yield(u 1 ) = yield(u 2 ) and 1 = 2 .",
"Labeled precision and recall are defined by dividing the number of matching edges in G 1 and G 2 by |E 1 | and |E 2 |, respectively.",
"F 1 is their harmonic mean: · Precision · Recall Precision + Recall Unlabeled precision, recall and F 1 are the same, but without requiring that 1 = 2 for the edges to match.",
"We evaluate these measures for primary and remote edges separately.",
"For a more finegrained evaluation, we additionally report precision, recall and F 1 on edges of each category.",
"13 Participating Systems We received a total of eight submissions to the different tracks: MaskParse@Deskiñ 12 We are not aware of any such annotation, but include this restriction for completeness.",
"13 The official evaluation script providing both coarse-grained and fine-grained scores can be found in https://github.com/huji-nlp/ucca/blob/ master/scripts/evaluate_standard.py.",
"14 It was later discovered that CUNY-PekingU used some of the evaluation data for training in the open tracks, and they were thus disqualified from these tracks.",
"In terms of parsing approaches, the task was quite varied.",
"HLT@SUDA converted UCCA graphs to constituency trees and trained a constituency parser and a recovery mechanism of remote edges in a multi-task framework.",
"MaskParse@Deskiñ used a bidirectional GRU tagger with a masking mechanism.",
"Tüpa and XLangMo used a transition-based approach.",
"UC Davis used an encoder-decoder architecture.",
"GCN-SEM uses a BiLSTM model to predict Semantic Dependency Parsing tags, when the syntactic dependency tree is given in the input.",
"CUNY-PKU is based on an ensemble that includes different variations of the TUPA parser.",
"DAN-GNT@UIT.VNU-HCM converted syntactic dependency trees to UCCA graphs.",
"Different systems handled remote edges differently.",
"DANGNT@UIT.VNU-HCM and GCN-SEM ignored remote edges.",
"UC Davis used a different BiLSTM for remote edges.",
"HLT@SUDA marked remote edges when converting the graph to a constituency tree and trained a classification model for their recovery.",
"MaskParse@Deskiñ handles remote edges by detecting arguments that are outside of the parent's node span using a detection threshold on the output probabilities.",
"In terms of using the data, all teams but one used the UCCA XML format, two used the CoNLL-U format, which is derived by a lossy conversion process, and only one team found the other data formats helpful.",
"One of the teams (MaskParse@Deskiñ) built a new training data adapted to their model by repeating each sentence N times, N being the number of non-terminal nodes in the UCCA graphs.",
"Three of the teams adapted the baseline TUPA parser, or parts of it to form their parser, namely TüPa, CUNY-PekingU and XLangMo; HLT@SUDA used a constituency parser (Stern et al., 2017) as a component in their model; DANGNT@UIT.VNU-HCM is a rule-based system over the Stanford Parser, and the rest are newly constructed parsers.",
"All teams found it useful to use external resources beyond those provided by the Shared Task.",
"Four submissions used external embeddings, MUSE (Conneau et al., 2017) in the case of MaskParse@Deskiñ and XLangMo, ELMo (Peters et al., 2018) in the case of TüPa, 15 and BERT (Devlin et al., 2019) in the case of HLT@SUDA.",
"Other resources included additional unlabeled data (TüPa), a list of multi-word expressions (MaskParse@Deskiñ), and the Stanford parser in the case of DANGNT@UIT.VNU-HCM.",
"Only CUNY-PKU used the 20K unlabeled parallel data in English and French.",
"A common trend for many of the systems was the use of cross-lingual projection or transfer (MaskParse@Deskiñ, HLT@SUDA, TüPa, GCN-Sem, CUNY-PKU and XLangMo).",
"This was necessary for French, and was found helpful for German as well (CUNY-PKU).",
"Table 4 shows the labeled and unlabeled F1 for primary and remote edges, for each system in each track.",
"Overall F1 (All) is the F1 calculated over both primary and remote edges.",
"Full results are available online.",
"16 Figure 3 shows the fine-grained evaluation by labeled F1 per UCCA category, for each system in each track.",
"While Ground edges were uniformly 16 http://bit.ly/semeval2019task1results difficult to parse due to their sparsity in the training data, Relators were the easiest for all systems, as they are both common and predictable.",
"The Process/State distinction proved challenging, and most main relations were identified as the more common Process category.",
"The winning system in most tracks (HLT@SUDA) performed better on almost all categories.",
"Its largest advantage was on Parallel Scenes and Linkers, showing was especially successful at identifying Scene boundaries relative to the other systems, which requires a good understanding of syntax.",
"Results Discussion The HLT@SUDA system participated in all the tracks, obtaining the first place in the six English and German tracks and the second place in the French open track.",
"The system is based on the conversion of UCCA graphs into constituency trees, marking remote and discontinuous edges for recovery.",
"The classification-based recovery of the remote edges is performed simultaneously with the constituency parsing in a multi-task learning framework.",
"This work, which further connects between semantic and syntactic parsing, proposes a recovery mechanism that can be applied to other grammatical formalisms, enabling the conversion of a given formalism to another one for parsing.",
"The idea of this system is inspired by the pseudo non-projective dependency parsing approach proposed by Nivre and Nilsson (2005) .",
"The MaskParse@Deskiñ system only participated to the French open track, focusing on crosslingual parsing.",
"The system uses a semantic tagger, implemented with a bidirectional GRU and a masking mechanism to recursively extract the inner semantic structures in the graph.",
"Multilingual word embeddings are also used.",
"Using the English and German training data as well as the small French trial data for training, the parser ranked fourth in the French open track with a labeled F1 score of 65.4%, suggesting that this new model could be useful for low-resource languages.",
"The Tüpa system takes a transition-based approach, building on the TUPA transition system and oracle, but modifies its feature representations.",
"Specifically, instead of representing the parser configuration using LSTMs over the partially parsed graph, stack and buffer, they use feedforward networks with ELMo contextualized embeddings.",
"The stack and buffer are represented by the top three items on them.",
"For the partially parsed graph, they extract the rightmost and leftmost parents and children of the respective items, and represent them by the ELMo embedding of their form, the embedding of their dependency heads (for terminals, for non-terminals this is replaced with a learned embedding) and the embeddings of all terminal children.",
"Results are generally on-par with the TUPA baseline, and surpass it from the out-of-domain English setting.",
"This suggests that the TUPA architecture may be simplified, without compromising performance.",
"The UC Davis system participated only in the English closed track, where they achieved the second highest score, on par with TUPA.",
"The proposed parser has an encoder-decoder architecture, where the encoder is a simple BiLSTM encoder for each span of words.",
"The decoder iteratively and greedily traverses the sentence, and attempts to predict span boundaries.",
"The basic algorithm yields an unlabeled contiguous phrase-based tree, but additional modules predict the labels of the spans, discontiguous units (by joining together spans from the contiguous tree under a new node), and remote edges.",
"The work is inspired by Kitaev and Klein (2018) , who used similar methods for constituency parsing.",
"The GCN-SEM system uses a BiLSTM encoder, and predicts bi-lexical semantic dependencies (in the SDP format) using word, token and syntactic dependency parses.",
"The latter is incorporated into the network with a graph convolutional network (GCN).",
"The team participated in the English and German closed tracks, and were not among the highest-ranking teams.",
"However, scores on the UCCA test sets converted to the bi-lexical CoNLL-U format were rather high, implying that the lossy conversion could be much of the reason.",
"The CUNY-PKU system was based on an ensemble.",
"The ensemble included variations of TUPA parser, namely the MLP and BiLSTM models (Hershcovich et al., 2017) and the BiLSTM model with an additional MLP.",
"The system also proposes a way to aggregate the ensemble going through CKY parsing and accounting for remotes and discontinuous spans.",
"The team participated in all tracks, including additional information in the open domain, notably synthetic data based on automatically translating annotated texts.",
"Their system ranks first in the French open track.",
"The DANGNT@UIT.VNU-HCM system partic-ipated only in the English Wiki open and closed tracks.",
"The system is based on graph transformations from dependency trees into UCCA, using heuristics to create non-terminal nodes and map the dependency relations to UCCA categories.",
"The manual rules were developed based on the training and development data.",
"As the system converts trees to trees and does not add reentrancies, it does not produce remote edges.",
"While the results are not among the highest-ranking in the task, the primary labeled F1 score of 71.1% in the English open track shows that a rule-based system on top of a leading dependency parser (the Stanford parser) can obtain reasonable results for this task.",
"Conclusion The task has yielded substantial improvements to UCCA parsing in all settings.",
"Given that the best reported results were achieved with different parsing and learning approaches than the baseline model TUPA (which has been the only available parser for UCCA), the task opens a variety of paths for future improvement.",
"Cross-lingual transfer, which capitalizes on UCCA's tendency to be preserved in translation, was employed by a number of systems and has proven remarkably effective.",
"Indeed, the high scores obtained for French parsing in a low-resource setting suggest that high quality UCCA parsing can be straightforwardly extended to additional languages, with only a minimal amount of manual labor.",
"Moreover, given the conceptual similarity between the different semantic representations , it is likely the parsers developed for the shared task will directly contribute to the development of other semantic parsing technology.",
"Such a contribution is facilitated by the available conversion scripts available between UCCA and other formats."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"2",
"6",
"8",
"9"
],
"paper_header_content": [
"Overview",
"Task Definition",
"Data & Resources",
"TUPA: The Baseline Parser",
"Evaluation",
"·",
"Participating Systems",
"Discussion",
"Conclusion"
]
} | GEM-SciDuet-train-25#paper-1026#slide-11 | Conclusion | Substantial improvements to UCCA parsing
High variety of methods
Thanks! Annotators, organizers, participants
Daniel Hershcovich, Leshem Choshen, Elior Sulem,
Zohar Aizenbud, Ari Rappoport and Omri Abend
Please participate in the CoNLL 2019 Shared Task:
Cross-Framework Meaning Representation Parsing
SDP, EDS, AMR and UCCA mrp.nlpl.eu | Substantial improvements to UCCA parsing
High variety of methods
Thanks! Annotators, organizers, participants
Daniel Hershcovich, Leshem Choshen, Elior Sulem,
Zohar Aizenbud, Ari Rappoport and Omri Abend
Please participate in the CoNLL 2019 Shared Task:
Cross-Framework Meaning Representation Parsing
SDP, EDS, AMR and UCCA mrp.nlpl.eu | [] |
GEM-SciDuet-train-26#paper-1027#slide-0 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-0 | Leadership | A process of social influence in which a person can enlist the aid and support of others in the accomplishment of a common task [Chemers. 2014]
accomplishment of a common task [Chemers. 2014]
Get them to do something significant
Energizing people toward a goal [Mills. 2005] | A process of social influence in which a person can enlist the aid and support of others in the accomplishment of a common task [Chemers. 2014]
accomplishment of a common task [Chemers. 2014]
Get them to do something significant
Energizing people toward a goal [Mills. 2005] | [] |
GEM-SciDuet-train-26#paper-1027#slide-1 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-1 | Leadership Styles | Get little input from group members
Control over all decisions
Give little guidance to group members
Leave them to decision-making
Encourage group members to participate
Retain the final say in the decision-making
How about the kings in the old times? | Get little input from group members
Control over all decisions
Give little guidance to group members
Leave them to decision-making
Encourage group members to participate
Retain the final say in the decision-making
How about the kings in the old times? | [] |
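The Method passage quoted in the record above filters the AJD to articles that contain direct quotations and then reads the king's action off each article's final sentence, matched against a lexicon of roughly sixty decision verbs. The dump itself contains no code for this step, so the snippet below is only a minimal Python sketch of that filtering and matching logic; the verb lists, the quotation heuristic, and the sentence splitting are illustrative assumptions, not the authors' actual implementation.

```python
import re

# Illustrative decision-verb lists (placeholders, not the authors' sixty-verb lexicon).
AD_VERBS = {"명하다", "하교하다"}      # direct order, no discussion
DO_VERBS = {"윤허하다", "불허하다"}    # order / approve / reject after discussion
DF_VERBS = {"따르다", "논의하다"}      # follow or discuss the officials' suggestion

def has_quotation(article):
    """Keep only articles with direct quotations (added by the translators)."""
    return any(mark in article for mark in ('"', "\u201c", "\u300c"))

def last_sentence(article):
    """Naive sentence split; the king's action is usually reported at the end."""
    parts = [s.strip() for s in re.split(r"(?<=[.?!])\s+", article.strip()) if s.strip()]
    return parts[-1] if parts else ""

def decision_type(sentence):
    """Map the final sentence to AD / DO / DF by the decision verb it contains."""
    for label, verbs in (("AD", AD_VERBS), ("DO", DO_VERBS), ("DF", DF_VERBS)):
        if any(verb in sentence for verb in verbs):
            return label
    return None
```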
GEM-SciDuet-train-26#paper-1027#slide-2 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-2 | Autocratic | Control over all decisions | Control over all decisions | [] |
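The records in this dump repeatedly describe turning each king's labelled decisions (AD, DO, DF) into a per-king ruling-style distribution before any comparison is made. As a small, self-contained illustration of that bookkeeping step, with invented labels rather than real AJD counts, one could write:

```python
from collections import Counter

STYLES = ("AD", "DO", "DF")

def ruling_style_distribution(labels):
    """Turn a king's list of AD/DO/DF labels into a probability distribution."""
    counts = Counter(labels)
    total = sum(counts[s] for s in STYLES) or 1
    return {s: counts[s] / total for s in STYLES}

# Hypothetical labelled decisions for two kings (not real counts from the AJD).
decisions = {
    "Sejong": ["DF", "DO", "DF", "DO", "DF", "AD"],
    "Yeonsangun": ["AD", "AD", "DO", "AD", "DF", "AD"],
}
distributions = {king: ruling_style_distribution(lab) for king, lab in decisions.items()}
print(distributions)
```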
GEM-SciDuet-train-26#paper-1027#slide-3 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-3 | Research Questions | 1. Do kings show different kinds of leadership styles? | 1. Do kings show different kinds of leadership styles? | [] |
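The Results passage above compares kings by the Jensen-Shannon divergence between their ruling-style distributions. A self-contained sketch of that computation is given below; the two distributions are made up for illustration. (scipy.spatial.distance.jensenshannon could be used instead, but note it returns the square root of the divergence.)

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence (base 2) for discrete distributions on the same support."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Symmetric Jensen-Shannon divergence between two distributions, in bits."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Hypothetical (AD, DO, DF) distributions for two kings; not the paper's numbers.
heonjong = [0.05, 0.25, 0.70]
taejo = [0.40, 0.35, 0.25]
print(round(js_divergence(heonjong, taejo), 4))
```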
GEM-SciDuet-train-26#paper-1027#slide-4 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-4 | Dataset | What kinds of data are needed?
Discussions with government officials
Long and large dataset
Requirements: records of kings official duty activities
My answer: The Annals of the Joseon Dynasty | What kinds of data are needed?
Discussions with government officials
Long and large dataset
Requirements: records of kings official duty activities
My answer: The Annals of the Joseon Dynasty | [] |
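Each copy of the paper text above states that topics were found with LDA (300 topics, Gibbs sampling, hyperparameter optimization after 100 iterations, stopwords and rare words removed). The original implementation is not part of this dump; the sketch below uses gensim's LdaModel as a stand-in, which relies on variational inference rather than Gibbs sampling, so it only approximates the paper's setup, and the toy documents are invented.

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy documents standing in for preprocessed AJD articles (stopwords and rare words removed).
docs = [
    ["king", "official", "tax", "grain"],
    ["king", "rite", "ancestor", "temple"],
    ["border", "army", "general", "king"],
]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# The paper fits 300 topics on the full corpus; a toy corpus needs far fewer.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, alpha="auto", random_state=0)
for topic_id, words in lda.show_topics(num_topics=2, num_words=4, formatted=False):
    print(topic_id, [w for w, _ in words])
```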
GEM-SciDuet-train-26#paper-1027#slide-5 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-5 | The Annals of the Joseon Dynasty | National Institute of Korean History (http://www.history.go.kr)
Translated it to modern Korean
Category (political, economic, social and cultural)
Entity (person, location, nation)
Published on the web | National Institute of Korean History (http://www.history.go.kr)
Translated it to modern Korean
Category (political, economic, social and cultural)
Entity (person, location, nation)
Published on the web | [] |
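The topic-specific comparisons described in the records above are multinomial goodness-of-fit tests following Read and Cressie (1988), whose recommended statistic is the power-divergence statistic with lambda = 2/3. SciPy exposes this directly; the counts and the overall distribution below are invented purely for illustration.

```python
from scipy.stats import power_divergence

# Hypothetical counts of one king's decisions (AD, DO, DF) within a single topic ...
topic_counts = [12, 30, 58]
# ... and expected counts under that king's overall ruling-style distribution.
overall_dist = [0.20, 0.35, 0.45]
expected = [p * sum(topic_counts) for p in overall_dist]

# Read-Cressie power-divergence statistic (lambda_="cressie-read" corresponds to 2/3).
stat, p_value = power_divergence(f_obs=topic_counts, f_exp=expected,
                                 lambda_="cressie-read")
print(f"statistic={stat:.3f}, p={p_value:.4f}")
```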
GEM-SciDuet-train-26#paper-1027#slide-6 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-6 | The Joseon Dynasty | King governs the nation
King decides on official issues
King discusses it with government officials King
A screenshot of a historical drama - Yi san | King governs the nation
King decides on official issues
King discusses it with government officials King
A screenshot of a historical drama - Yi san | [] |
GEM-SciDuet-train-26#paper-1027#slide-7 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-7 | Methodology | Look at the kings words and decisions
To avoid non-governmental affairs (e.g. observations)
Identify kings final decisions in the article
Build sixty candidate verbs
Look at the verbs in kings last sentence and title
Discover topics in each article
LDA outputs a topic proportion for each article
LDA outputs a multinomial word distribution for each topic
Identify who said what
To analyze the participants in the discussion
Look at subjects and person tags in front of the sentence of each quote | Look at the kings words and decisions
To avoid non-governmental affairs (e.g. observations)
Identify kings final decisions in the article
Build sixty candidate verbs
Look at the verbs in kings last sentence and title
Discover topics in each article
LDA outputs a topic proportion for each article
LDA outputs a multinomial word distribution for each topic
Identify who said what
To analyze the participants in the discussion
Look at subjects and person tags in front of the sentence of each quote | [] |
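Aside: the record above describes a verb-based heuristic for labelling each article's final decision as AD, DO or DF. The sketch below only illustrates that control flow and is not the paper's code; the English verb lists and the example sentence are hypothetical stand-ins for the sixty Korean decision verbs (Table 2 of the paper), which would be matched on the Korean text after POS tagging with HanNanum.

```python
# Minimal sketch of the AD/DO/DF labelling heuristic (hypothetical verb lists).

AD_VERBS = {"ordered"}                          # order with no preceding discussion
DO_VERBS = {"ordered", "approved", "rejected"}  # active decision after discussion
DF_VERBS = {"followed", "accepted"}             # king defers to the officials

def classify_ruling_style(last_sentence, has_discussion):
    """Label one article by the decision verb in its final sentence.

    `last_sentence` is the (translated) final sentence whose subject is the
    king; `has_discussion` says whether officials' quoted opinions precede it.
    """
    tokens = set(last_sentence.lower().replace(".", "").split())
    if not has_discussion and tokens & AD_VERBS:
        return "AD"   # Arbitrary Decision
    if has_discussion and tokens & DF_VERBS:
        return "DF"   # Discussion and Follow
    if has_discussion and tokens & DO_VERBS:
        return "DO"   # Discussion and Order
    return None       # not a decision article

print(classify_ruling_style("The king followed the official's suggestion.", True))  # DF
```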
GEM-SciDuet-train-26#paper-1027#slide-8 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-8 | Ruling styles | Discussion and Order (DO) example
Discussion and Follow (DF) example
No discussion with officials
Orders, approves, or rejects at the end
Arbitrary Decision (AD) example | Discussion and Order (DO) example
Discussion and Follow (DF) example
No discussion with officials
Orders, approves, or rejects at the end
Arbitrary Decision (AD) example | [] |
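Aside: the record above compares kings by the Jensen-Shannon divergence between their AD/DO/DF distributions. A minimal sketch of that computation follows; the two example distributions are made-up numbers, not values from the paper.

```python
import numpy as np

def js_divergence(p, q, base=2.0):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) is treated as 0
        return np.sum(a[mask] * (np.log(a[mask] / b[mask]) / np.log(base)))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical AD/DO/DF proportions for two kings (not the paper's numbers).
king_a = [0.10, 0.55, 0.35]
king_b = [0.30, 0.50, 0.20]
print(round(js_divergence(king_a, king_b), 4))
```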
GEM-SciDuet-train-26#paper-1027#slide-9 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-9 | Research Question 1 | 1. Do kings show different kinds of leadership styles? | 1. Do kings show different kinds of leadership styles? | [] |
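Aside: the record above tests whether a king's ruling-style distribution differs from the others with a multinomial test (Read and Cressie, 1988). The sketch below substitutes an ordinary chi-square goodness-of-fit test from SciPy to show the shape of such a check; the counts are invented, not the paper's data.

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical AD/DO/DF counts for one king and for all kings pooled.
king_counts = np.array([120, 640, 340])
pooled_counts = np.array([900, 5200, 3900])

# Expected counts for this king under the pooled ruling-style proportions.
expected = pooled_counts / pooled_counts.sum() * king_counts.sum()

stat, p_value = chisquare(f_obs=king_counts, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
```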
GEM-SciDuet-train-26#paper-1027#slide-10 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-10 | Results Among kings | Tyrants (Yeonsangun, Gwanghaegun) show high value of AD | Tyrants (Yeonsangun, Gwanghaegun) show high value of AD | [] |
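Aside: the record above fits an LDA topic model with 300 topics, Gibbs sampling and hyperparameter optimization. The sketch below uses gensim's variational LDA on toy documents simply to show the workflow; the documents, topic count and settings are placeholders, not the paper's configuration.

```python
from gensim import corpora, models

# Toy tokenized documents standing in for the translated AJD articles.
texts = [
    ["king", "ordered", "tax", "grain", "province"],
    ["official", "reported", "drought", "harvest", "grain"],
    ["king", "approved", "appointment", "official", "office"],
]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(doc) for doc in texts]

# The paper uses 300 topics with Gibbs sampling and hyperparameter
# optimization; this toy run uses 2 topics and gensim's defaults.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=0)

for topic_id, words in lda.show_topics(num_topics=2, num_words=3):
    print(topic_id, words)
```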
GEM-SciDuet-train-26#paper-1027#slide-11 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-11 | Research Question 2 | 2. What factors are related with kings leadership? | 2. What factors are related with kings leadership? | [] |
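The method described in the paper content above labels each relevant article as AD, DO or DF by looking at the article's last sentence, checking that its subject is the king, and matching the verb against a sixty-verb lexicon. The sketch below only illustrates that control flow; the English placeholder verb sets and the `subject_is_king` check are stand-ins, since the actual pipeline works on Korean text with the HanNanum tagger.

```python
# Placeholder English lexicons; the paper uses about sixty Korean verbs in these classes.
AD_VERBS = {"decreed"}                     # direct order with no preceding discussion
DO_VERBS = {"ordered", "approved", "rejected"}
DF_VERBS = {"followed", "accepted"}

def subject_is_king(tokens):
    # Stand-in for the subject check done with the HanNanum POS tagger on Korean text.
    return "king" in tokens

def label_ruling_style(article_sentences, had_discussion):
    """Return 'AD', 'DO', 'DF' or None for one article."""
    tokens = set(article_sentences[-1].lower().replace(".", "").split())
    if not subject_is_king(tokens):
        return None
    if not had_discussion and tokens & AD_VERBS:
        return "AD"
    if tokens & DO_VERBS:
        return "DO"
    if tokens & DF_VERBS:
        return "DF"
    return None

example = ["An official reported a famine in the southern provinces.",
           "The king followed the official's suggestion."]
print(label_ruling_style(example, had_discussion=True))  # -> DF (toy example)
```

In practice the discussion/no-discussion distinction and the verb classes would come from the annotated lexicon rather than hard-coded sets.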
GEM-SciDuet-train-26#paper-1027#slide-12 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-12 | Results Topics | Investigate the effects of the topics
Look at the difference between ruling styles overall and given a topic
Different from overall Injo (Weak king)
Remission of sins topic
Kings act DO than overall
Injo tends to DF
Sejong the Great acts DF
Injo tends to give grants to servants than overall | Investigate the effects of the topics
Look at the difference between ruling styles overall and given a topic
Different from overall Injo (Weak king)
Remission of sins topic
Kings act DO than overall
Injo tends to DF
Sejong the Great acts DF
Injo tends to give grants to servants than overall | [] |
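The comparison shown in this slide rests on a multinomial goodness-of-fit test between a king's overall AD/DO/DF distribution and his distribution for one topic (the paper cites Read and Cressie, 1988). The sketch below approximates that with SciPy's chi-square goodness-of-fit test on made-up counts; it is not the exact statistic used in the paper.

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical AD/DO/DF counts for one king (illustrative only).
overall_counts = np.array([150, 900, 600])   # all of the king's labelled articles
topic_counts = np.array([5, 60, 15])         # articles assigned to a single topic

# Expected topic counts if the king kept his overall proportions for this topic.
expected = overall_counts / overall_counts.sum() * topic_counts.sum()

stat, p_value = chisquare(f_obs=topic_counts, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.01:
    print("Topic-specific ruling style differs significantly from the overall style.")
```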
GEM-SciDuet-train-26#paper-1027#slide-13 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-13 | Results Members | Investigate the effects of the participants in a discussion
Compute the mutual information among ruling styles
Agency officials who remonstrate to the king | Investigate the effects of the participants in a discussion
Compute the mutual information among ruling styles
Agency officials who remonstrate to the king | [] |
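The slide above mentions computing mutual information between ruling styles and the officials participating in a discussion. A small sketch of a mutual information computation from a joint contingency table is shown below; the table values are illustrative only.

```python
import numpy as np

def mutual_information(joint_counts):
    """Mutual information (in bits) from a 2-D contingency table of counts."""
    joint = np.asarray(joint_counts, dtype=float)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

# Rows: official present / absent in the discussion; columns: AD, DO, DF (toy numbers).
table = [[10, 120, 200],
         [90, 480, 300]]
print(f"I(presence; ruling style) = {mutual_information(table):.4f} bits")
```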
GEM-SciDuet-train-26#paper-1027#slide-14 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-14 | Results Time | Yeonsangun (Tyrant) Injo (Weak king)
Look at the temporal difference of a king
Investigate the changes over time
Yeonsangun becomes more arbitrary over time
Injo stays consistent in his ruling style | Yeonsangun (Tyrant) Injo (Weak king)
Look at the temporal difference of a king
Investigate the changes over time
Yeonsangun becomes more arbitrary over time
Injo stays consistent in his ruling style | [] |
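The temporal comparison in this slide (Yeonsangun drifting toward arbitrary decisions, Injo staying stable) can be reproduced by grouping the labelled articles by reign year and tracking the share of AD decisions over time. The sketch below shows that aggregation on fabricated records.

```python
from collections import Counter, defaultdict

# (reign_year, label) pairs for one king; toy data, not from the AJD.
labelled_articles = [(1, "DO"), (1, "DF"), (2, "AD"), (2, "DO"),
                     (3, "AD"), (3, "AD"), (3, "DO"), (4, "AD")]

per_year = defaultdict(Counter)
for year, label in labelled_articles:
    per_year[year][label] += 1

for year in sorted(per_year):
    counts = per_year[year]
    total = sum(counts.values())
    print(f"year {year}: AD share = {counts['AD'] / total:.2f}")
```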
GEM-SciDuet-train-26#paper-1027#slide-15 | 1027 | Five Centuries of Monarchy in Korea: Mining the Text of the Annals of the Joseon Dynasty | We present a quantitative study of the Annals of the Joseon Dynasty, the daily written records of the five hundred years of a monarchy in Korea. We first introduce the corpus, which is a series of books describing the historical facts during the Joseon dynasty. We then define three categories of the monarchial ruling styles based on the written records and compare the twentyfive kings in the monarchy. Finally, we investigate how kings show different ruling styles for various topics within the corpus. Through this study, we introduce a very unique corpus of monarchial records that span an entire monarchy of five hundred years and illustrate how text mining can be applied to answer important historical questions. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"paper_content_text": [
"Introduction Historical documents are usually studied qualitatively by researchers focusing on a close reading of a small number of documents.",
"However, for a large corpus of historical texts, qualitative methods have limitations, thus quantitative approaches have been introduced recently (Moretti, 2005; Jockers, 2013) .",
"There is also research in applying text mining and natural language processing methods to identify patterns in a corpus of large and longitudinal documents (Mimno, 2012) .",
"In this paper, we introduce a unique corpus of historical documents from the written records that span almost five hundred years from the fourteenth century up to the late nineteenth century within the Korean peninsula.",
"We apply text mining to this corpus to show the power of a computational approach in answering historical questions.",
"We first introduce The Annals of the Joseon Dynasty (Chunchugwan, 1863) .",
"Joseon is the last monarchial nation in the Korean Peninsula from its founding in 1392 up to 1910.",
"The Annals of the Joseon Dynasty are a series of books of historical facts, recorded almost daily during the Joseon dynasty.",
"Whenever a king abdicated the throne, the Chunchugwan (office for annals compilation) updated the Annals for that king from all related official and unofficial documents.",
"The Annals contain political, economic, social and cultural topics during the corresponding time periods.",
"To illustrate the application of a text mining approach, we analyze each king's ruling style from the Annals of the Joseon dynasty.",
"Being a monarchial system, almost all decisions within the government are confirmed by the king, where the king can make the decision on his own, or after discussing it with the government officials.",
"We identify the patterns of each king's decision making and compare the patterns among the kings.",
"The results show interesting patterns of the kings' ruling styles, including the tendency to make arbitrary decisions of the kings who were later dethroned because of tyranny.",
"Additionally, we apply a topic model to the corpus and analyze the kings' ruling style for each topic.",
"The Annals of the Joseon Dynasty In this section, we describe the details of The Annals of the Joseon Dynasty (from here referred to as the AJD) (Chunchugwan, 1863) and our process for building a corpus of the AJD.",
"In its entirety, the AJD consists of records from twentyseven kings over 519 years.",
"However, the last two kings' (Gojong, Sunjong) books are usually excluded from research by historians because many facts are distorted.",
"We follow that convention and use the books of the first twenty-five kings.",
"These records, in their original Chinese text and in the Korean translations, are available publicly through Joseon was a monarchy, but a king could not make all decisions by himself.",
"Instead, Joseon adopted a government system that most of the public issues are discussed with the government officials (Park, 1983; Kim, 2008) before the king made the decisions, which are all recorded in the AJD.",
"Hence, by analyzing the decision making process in the AJD, we can understand each king's ruling style.",
"Categorizing ruling style In Joseon dynasty, the king was the final decision maker.",
"Even when the government officials discussed the public issues, a king's approval was needed.",
"We can categorize each king's decision making process into three types.",
"First, a king can order directly without discussion, which we call Arbitrary Decision (AD).",
"Second, a king can discuss an issue with the officials and then direct his order, which we call Discussion and Order (DO).",
"Third, a king can discuss an issue with the officials and then decide to follow the officials' suggestion, which we call Discussion and Follow (DF).",
"The difference between DO and DF is that in DO, the king acts aggressively with his own opinion.",
"From these observations, we ask two research questions: 1) Can we identify and categorize kings with different ruling styles?",
"2) Do kings' ruling styles differ depending on the topic?",
"Method To understand each king's ruling style, we first identify relevant articles that contain the king's decision making because many of the articles describe non-governmental affairs, such as the weather, or simple status reports.",
"These relevant articles contain direct quotations of the words of the king or the government official.",
"The original texts do not contain any quotation marks, but translators added them to distinguish explicit quotations, which we can use to identify these relevant articles.",
"Its size is 126K, 36% over all articles.",
"Each article contains who said what for an issue, and king's final actions are written mostly in the last part of the article.",
"For example, the underlined last sentence in Figure 1b says that the king followed the official's suggestion.",
"Hence, to identify king's action for each issue, we focus on the last sentence in each article.",
"First, we identify that the setence subject is the king, because some issues are dealt by others.",
"For example, Sunjo, Heonjong and Cheoljong's mother or grandmother ruled as regent, so her decisions are recorded in the AJD.",
"To identify the part of speech in Korean, we used HanNanum (Choi et al., 2012) .",
"And, we investigate the verbs that indicating decisions including order, follow, approval and reject.",
"We use sixty verbs that describe king's decision specifically, and table 2 shows example words.",
"Finally, we classify these decisions into three types: 1) the king orders without discussions with the officials, and we label them as AD, 2) the king orders, approves, or rejects verbs in which their original Chinese characters show active decision making by the king, and we label them as DO, and 3) the king follows or discusses verbs which show passive submission by the king, and we label them as DF.",
"To identify topics, we use a Bayesian topic model, LDA (Blei et al., 2003) .",
"We implement it using Gibbs sampling (Griffiths and Steyvers, 2004) , set 300 topics, and optimized hyperparameters after 100 iterations (Asuncion et al., 2009) .",
"We remove stopwords and words with document frequency of 30 or smaller.",
"Results and Discussions We investigate the difference of ruling style between kings.",
"We run multinomial test (Read and Cressie, 1988 ) between king's ruling style distributions.",
"Result shows that almost all kings are different significant from others (p < 0.001).",
"It means that each king has his own ruling style.",
"Figure 2 shows the distribution of each category of ruling style.",
"Overall, many kings do not act arbitrary.",
"They discuss about many of the national affairs with officials.",
"But, Taejo who is the founder of the Joseon dynasty shows high value of AD.",
"And Yeonsangun and Gwanghaegun who are evaluated as a tyrant also show high value of it.",
"So we can imagine that tyrants tend to act arbitrarily.",
"We also identified those kings whose ruling style differed most from other kings.",
"We use JS divergence which is the symmetric measure of the difference between two probability distributions.",
"We compute JS divergence with each king pair's ruling style distributions.",
"Result shows that Heonjong (0.1220) and Yeonsangun (0.0998) have highest distance value.",
"It means their ruling style are quite different from other kings.",
"Because Heonjong's grandmother governed the Joseon each year, so his actions are quite few.",
"But, unlike Yeonsangun, Gwanghaegun (0.0454) who is known as a tyrant has similar value mean distance from other kings (0.0434).",
"It means his ruling style is quite similar to other kings, and this result supports previous results in Korean historical study (Kye, 2008 ) that re-evaluate his reputation.",
"We investigate the difference of king's ruling style based on the topic.",
"We run multinomial test (Read and Cressie, 1988 ) between king's overall ruling style distribution and specific distribution given a topic.",
"Results show that some ruling styles given a topic are different significant from overall (p < 0.01).",
"It means that the king's ruling style when the topic is given is different from his usual style.",
"Table 3 shows examples of topic.",
"Figure 3 shows four kings' overall ruling style and specific one given a topic.",
"Comparing with the leftmost bars which is overall ruling style of the king, each ruling style given a topic is different from it.",
"And, we can see that kings show similar/different ruling style for a topic.",
"For example, kings tend to discuss and order (DO) to officials for retirement and remission topics.",
"And, Sejong the Great and Gwanghaegun discuss and follow (DF) officials' words for agricultural topic.",
"But, for grants topic, Yeonsangun and Gwanghaegun act more arbitrarily (AD) than overall ruling style, and Sejong the Great follows more official's opinions (DF).",
"Conclusion and Future Work We introduced long and large historical documents, The Annals of the Joseon Dynasty.",
"It contains lots of topics such as political, economic, social and cultural over 500 years.",
"We looked at the ruling style of kings in Joseon dynasty and its difference by topics by computational methods.",
"This is ongoing work, and we are looking to find more hidden structures in the AJD.",
"Currently, historians evaluate the king's reputations (Park, 2004; Lee, 2010) .",
"This evaluation is done by many aspects, but one of the important feature is king's ruling style (Kim, 2008) .",
"So we are looking to improve methods for analyzing ruling style more specifically.",
"For example, we will look at the relationship with officials, especially who can make the king follows his opinion.",
"This approach can be used to measure king's leadership."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4"
],
"paper_header_content": [
"Introduction",
"The Annals of the Joseon Dynasty",
"Categorizing ruling style",
"Method",
"Results and Discussions",
"Conclusion and Future Work"
]
} | GEM-SciDuet-train-26#paper-1027#slide-15 | Conclusion | Introduced the Annals of the Joseon Dynasty
Long and large historical documents
Translated and annotated corpus
Measured the kings' leadership styles
Relationship with factors (Topics, Members, Time) | Introduced the Annals of the Joseon Dynasty
Long and large historical documents
Translated and annotated corpus
Measured the kings' leadership styles
Relationship with factors (Topics, Members, Time) | [] |
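A minimal sketch of the Jensen-Shannon divergence comparison described in the record above (Results and Discussions). The king names and the three-category ruling-style distributions (AD, DO, DF) are placeholder values, not numbers from the paper; only numpy is assumed.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions (base-2 logs)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical ruling-style distributions over (AD, DO, DF); placeholder values only.
styles = {
    "King A": [0.30, 0.50, 0.20],
    "King B": [0.05, 0.45, 0.50],
    "King C": [0.10, 0.55, 0.35],
}

# Mean pairwise JS divergence of each king from all the others, as used to find
# the kings whose ruling style differs most from the rest.
for king, dist in styles.items():
    others = [js_divergence(dist, d) for name, d in styles.items() if name != king]
    print(f"{king}: mean JS divergence from the other kings = {np.mean(others):.4f}")
```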
GEM-SciDuet-train-27#paper-1028#slide-0 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-0 | Problem | Mental illnesses are underdiagnosed
Explore the predictive power of demographic and personality based features.
Find insights provided by each feature. | Mental illnesses are underdiagnosed
Explore the predictive power of demographic and personality based features.
Find insights provided by each feature. | [] |
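A hedged sketch of the prediction setup described in this record's Prediction section: a binary logistic regression with elastic-net regularisation, evaluated by ROC AUC under 10-fold cross-validation. scikit-learn is assumed (the paper cites Pedregosa et al., 2011); the feature matrix and labels below are random placeholders standing in for user-level features such as age, gender and the Big Five scores, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 7))         # placeholder: age, gender, five personality scores
y = rng.integers(0, 2, size=800)      # placeholder: 1 = self-reported diagnosis, 0 = control

# Elastic-net-regularised logistic regression (requires the saga solver).
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"mean ROC AUC over 10 folds: {aucs.mean():.3f}")
```

On random labels the AUC will hover around .5; the point is only the shape of the pipeline (one classifier per pairwise task, imbalance-robust scoring), not the reported numbers.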
GEM-SciDuet-train-27#paper-1028#slide-1 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-1 | Data | I have been diagnosed with depression
each user has avg. 3400 messages | I have been diagnosed with depression
each user has avg. 3400 messages | [] |
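A rough sketch of the differential language analysis (DLA) step described in this record: each individual feature (1-3 gram or topic usage) is correlated with the binary group label while age and gender are controlled for, and only features passing a Bonferroni-corrected threshold of p < 0.001 are kept. The arrays are random placeholders, and the residual-based partial correlation below is an approximation of that procedure rather than the authors' implementation; numpy and scipy are assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_users, n_features = 500, 2000
X = rng.poisson(1.0, size=(n_users, n_features)).astype(float)   # placeholder feature usage
y = rng.integers(0, 2, size=n_users).astype(float)               # placeholder: 1 = diagnosed
covariates = np.column_stack([
    np.ones(n_users),                  # intercept
    rng.normal(30.0, 8.0, n_users),    # placeholder predicted age
    rng.integers(0, 2, n_users),       # placeholder predicted gender
])

def residualise(v, Z):
    """Remove the least-squares contribution of the covariates Z from v."""
    beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return v - Z @ beta

y_res = residualise(y, covariates)
alpha = 0.001 / n_features   # Bonferroni correction over all explored features

significant = []
for j in range(n_features):
    r, p = stats.pearsonr(residualise(X[:, j], covariates), y_res)
    if p < alpha:
        significant.append((j, r))

print(f"{len(significant)} features pass the corrected threshold")
```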
GEM-SciDuet-train-27#paper-1028#slide-2 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-2 | Study Setup | age, gender, personality ication mental illness | age, gender, personality ication mental illness | [] |
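The record above describes training binary logistic regression classifiers with Elastic Net regularisation and evaluating them by 10-fold cross-validated ROC AUC. The following is a minimal, hypothetical sketch of that setup in scikit-learn; the feature matrix `X`, labels `y`, and all hyperparameter values (e.g. `l1_ratio`, `C`) are illustrative placeholders, not values taken from the paper or from this dataset.

```python
# Hypothetical sketch: elastic-net logistic regression evaluated with
# 10-fold cross-validated ROC AUC, roughly as described in the record above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))      # placeholder user-level features (age, gender, affect, intensity, Big Five)
y = rng.integers(0, 2, size=200)   # placeholder binary labels, e.g. depressed (1) vs. control (0)

clf = LogisticRegression(
    penalty="elasticnet",          # elastic-net regularisation
    solver="saga",                 # the scikit-learn solver that supports elastic net
    l1_ratio=0.5,                  # L1/L2 mix -- illustrative, not reported in the record
    C=1.0,
    max_iter=5000,
)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"mean ROC AUC over 10 folds: {scores.mean():.3f}")
```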
GEM-SciDuet-train-27#paper-1028#slide-3 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-3 | Age Gender | Model from FB and Twitter data | Model from FB and Twitter data | [] |
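The record above also describes differential language analysis (DLA): each 1-3 gram or topic is correlated with the binary group label, with age and gender included as covariates, and a feature is kept only if it passes a Bonferroni-corrected two-tailed p-value below 0.001. A minimal sketch under those assumptions follows; the function names, the residualization-based partial correlation, and the toy data are illustrative and are not part of the original pipeline.

```python
# Hypothetical sketch of differential language analysis (DLA): correlate each
# language feature with the group label while partialling out age and gender,
# then keep features that survive a Bonferroni-style correction.
import numpy as np
from scipy import stats

def residualize(v, covariates):
    """Remove the linear effect of the covariates from v via least squares."""
    design = np.column_stack([np.ones(len(v)), covariates])
    beta, *_ = np.linalg.lstsq(design, v, rcond=None)
    return v - design @ beta

def dla(features, label, covariates, alpha=0.001):
    """Return (feature index, partial r, p) for features passing the corrected threshold."""
    label_res = residualize(label.astype(float), covariates)
    n_tests = features.shape[1]
    hits = []
    for j in range(n_tests):
        feat_res = residualize(features[:, j], covariates)
        r, p = stats.pearsonr(feat_res, label_res)
        if p * n_tests < alpha:        # Bonferroni-corrected two-tailed p < alpha
            hits.append((j, r, p))
    return sorted(hits, key=lambda t: -abs(t[1]))

# Toy usage with random data standing in for per-user topic usage rates.
rng = np.random.default_rng(0)
topics = rng.random((500, 2000))       # per-user topic usage (placeholder)
group = rng.integers(0, 2, 500)        # e.g. depressed (1) vs. control (0)
age_gender = rng.random((500, 2))      # inferred age and gender (placeholder)
print(dla(topics, group, age_gender)[:10])
```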
GEM-SciDuet-train-27#paper-1028#slide-4 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-4 | Personality | high on neuroticism more introverted less agreeable
controlling for age and gender | high on neuroticism more introverted less agreeable
controlling for age and gender | [] |
GEM-SciDuet-train-27#paper-1028#slide-6 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-6 | Affect and Intensity | Model trained on 3000
mentally ill users are less aroused and less positive | Model trained on 3000
mentally ill users are less aroused and less positive | [] |
GEM-SciDuet-train-27#paper-1028#slide-7 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-7 | Liwc | standard psychologically inspired dictionaries
64 categories such as:
parts-of-speech topical categories emotions
standard baseline for open vocabulary approaches | standard psychologically inspired dictionaries
64 categories such as:
parts-of-speech topical categories emotions
standard baseline for open vocabulary approaches | [] |
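The differential language analysis described in the paper text of the record above (correlating each 1-3 gram or topic with the binary group label while including age and gender as covariates, and keeping only coefficients that pass a Bonferroni-corrected p < 0.001) is not accompanied by code in this dataset. The sketch below is one hedged way to implement it, not the authors' code: the residualization-based partial correlation, the synthetic data, and all variable names are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' implementation): differential language
# analysis as described in the paper text above -- correlate each textual feature
# with the binary group label while controlling for age and gender, keeping
# features that pass a Bonferroni-corrected significance threshold.
import numpy as np
from scipy import stats

def residualize(y, covariates):
    """Return residuals of y after regressing out the covariates (with intercept)."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def differential_language_analysis(features, labels, covariates, alpha=0.001):
    """Partial correlation of each feature with the 0/1 label, Bonferroni-corrected."""
    n_features = features.shape[1]
    threshold = alpha / n_features              # Bonferroni correction
    label_resid = residualize(labels.astype(float), covariates)
    results = []
    for j in range(n_features):
        feat_resid = residualize(features[:, j], covariates)
        r, p = stats.pearsonr(feat_resid, label_resid)
        if p < threshold:
            results.append((j, r, p))
    return sorted(results, key=lambda t: -abs(t[1]))

# Toy usage with random data standing in for per-user n-gram/topic frequencies.
rng = np.random.default_rng(0)
X = rng.random((200, 50))                       # 200 users x 50 features
y = rng.integers(0, 2, 200)                     # 1 = self-reported diagnosis, 0 = control
cov = np.column_stack([rng.normal(30, 8, 200),  # age (years)
                       rng.integers(0, 2, 200)])  # gender (binary-coded)
print(differential_language_analysis(X, y, cov)[:5])
```

With purely random data the list is usually empty, which is the expected behaviour of the Bonferroni threshold; on real per-user feature matrices the surviving features are the ones drawn in the word and topic clouds.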
GEM-SciDuet-train-27#paper-1028#slide-8 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-8 | Topics | Latent Dirichlet Allocation (LDA) underlying set of Facebook statuses
(same data as personality model)
2000 topics in total
7 features 64 features 2000 features | Latent Dirichlet Allocation (LDA) underlying set of Facebook statuses
(same data as personality model)
2000 topics in total
7 features 64 features 2000 features | [] |
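The "Prediction" section of the paper text in the record above describes binary logistic regression classifiers with Elastic Net regularisation, evaluated with 10-fold cross-validation and ROC AUC. The following is a minimal hedged sketch of that setup using scikit-learn (the library behind the Pedregosa et al., 2011 citation); the synthetic features, class balance, l1_ratio and C values are placeholders, not settings reported by the authors.

```python
# Minimal sketch of the described prediction setup: elastic-net logistic
# regression, 10-fold cross-validated ROC AUC. Synthetic data stands in for
# user-level features such as personality, LIWC categories or LDA topics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced toy classes, loosely mimicking the control vs. diagnosed split.
X, y = make_classification(n_samples=600, n_features=100,
                           weights=[0.65, 0.35], random_state=0)

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)

auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"10-fold ROC AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```

ROC AUC is used rather than accuracy because, as the paper notes, the classes are imbalanced and the decision threshold can later be tuned to an application-appropriate false-positive rate.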
GEM-SciDuet-train-27#paper-1028#slide-9 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-9 | Topics Depression | Topics controlled for age and gender | Topics controlled for age and gender | [] |
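The Figure 5 caption in the paper text of this record states that each 1-3 gram in the word clouds is sized by its point-biserial correlation with the binary depression label and coloured by relative frequency. The snippet below is a hedged illustration of that computation; the vocabulary, the random per-user frequencies and the labels are invented for the example and are not the authors' data or code.

```python
# Hedged sketch of the word-cloud statistics described in the Figure 5 caption:
# point-biserial correlation (drives word size) and mean relative frequency
# (drives colour) for each 1-3 gram, against a binary group label.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(1)
vocab = ["depressed", "meds", "crying", "anymore", "myself"]   # toy 1-grams
freq = rng.random((300, len(vocab)))          # per-user relative frequencies
label = rng.integers(0, 2, 300)               # 1 = depression group, 0 = control

for j, token in enumerate(vocab):
    r, p = pointbiserialr(label, freq[:, j])  # correlation -> word size
    usage = freq[:, j].mean()                 # overall frequency -> colour scale
    print(f"{token:10s} r={r:+.3f} p={p:.3g} mean_freq={usage:.3f}")
```

In the paper the same correlations are additionally adjusted for age and gender before plotting; that adjustment is omitted here for brevity and is covered by the partial-correlation sketch given earlier in this document.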
GEM-SciDuet-train-27#paper-1028#slide-10 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-10 | Topics PTSD | Topics controlled for age and gender | Topics controlled for age and gender | [] |
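The "Language Analysis" section of the paper text above describes differential language analysis (DLA): each topic or 1-3 gram is correlated with the binary group label while age and gender are included as covariates, and only coefficients passing a Bonferroni-corrected two-tailed threshold of 0.001 are kept. The sketch below shows one way such an analysis can be implemented; the residualisation-based partial correlation, the toy data, and all variable names are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy import stats

def residualize(v, covariates):
    """Return OLS residuals of v after removing the linear effect of the covariates."""
    Z = np.column_stack([np.ones(len(v)), covariates])
    beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return v - Z @ beta

def differential_language_analysis(features, labels, covariates, alpha=0.001):
    """Correlate each feature with the label, controlling for covariates,
    and keep only features passing a Bonferroni-corrected threshold."""
    n_features = features.shape[1]
    threshold = alpha / n_features              # Bonferroni correction
    label_res = residualize(labels.astype(float), covariates)
    hits = []
    for j in range(n_features):
        feat_res = residualize(features[:, j], covariates)
        r, p = stats.pearsonr(feat_res, label_res)
        if p < threshold:
            hits.append((j, r, p))
    return sorted(hits, key=lambda t: -abs(t[1]))

# Toy usage: 300 users, 50 topic-usage features, age and gender as covariates.
rng = np.random.default_rng(1)
topic_usage = rng.random((300, 50))
group = rng.integers(0, 2, size=300)                    # e.g. depressed vs. control
covs = np.column_stack([rng.normal(30, 8, size=300),    # age (placeholder)
                        rng.integers(0, 2, size=300)])  # gender (placeholder)
print(differential_language_analysis(topic_usage, group, covs)[:5])
```

With random toy data the hit list will usually be empty; the point is the control-for-covariates and multiple-comparison structure, not the output values.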
GEM-SciDuet-train-27#paper-1028#slide-12 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-12 | 1-3 grams | Depressed vs. Control, PTSD vs. Control, Depressed vs. PTSD
Penn | World Well-Being Project
Gender, Age | Depressed vs. Control, PTSD vs. Control, Depressed vs. PTSD
Penn | World Well-Being Project
Gender, Age | []
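The "Textual Features" section of the paper text in these records mentions keeping frequent 1-3 grams and filtering infrequent 2-3 grams by pointwise mutual information (PMI). The sketch below shows the PMI computation on a toy corpus; the corpus, the bigram-only scope, and the 2.0 cut-off are illustrative assumptions rather than the authors' actual settings.

```python
import math
from collections import Counter

def pmi(ngram, word_counts, ngram_counts, total_words, total_ngrams):
    """PMI of an n-gram relative to independent occurrence of its words."""
    p_ngram = ngram_counts[ngram] / total_ngrams
    p_independent = 1.0
    for word in ngram.split():
        p_independent *= word_counts[word] / total_words
    return math.log2(p_ngram / p_independent)

# Toy corpus; in the paper the counts would come from all users' tweets.
tokens = "i was diagnosed with depression i was feeling sad".split()
word_counts = Counter(tokens)
bigram_counts = Counter(" ".join(tokens[i:i + 2]) for i in range(len(tokens) - 1))
total_words, total_bigrams = sum(word_counts.values()), sum(bigram_counts.values())

kept = {}
for gram in bigram_counts:
    score = pmi(gram, word_counts, bigram_counts, total_words, total_bigrams)
    if score > 2.0:      # illustrative cut-off, not the paper's value
        kept[gram] = round(score, 2)
print(kept)
```

In practice such a filter is applied after the frequency threshold (features used by more than 10% of users), so only common phrases whose words genuinely co-occur are retained.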
GEM-SciDuet-train-27#paper-1028#slide-13 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-13 | Other features | use different word clusters
Brown clustering, NPMI Spectral clustering, Word2Vec/GloVe embeddings
linear ensemble of logistic regression classifiers
Mental Illness detection at the World Well-Being Project for the CLPsych 2015 Shared Task
D. Preotiuc-Pietro, M. Sap, H.A. Schwartz, L. Ungar | use different word clusters
Brown clustering, NPMI Spectral clustering, Word2Vec/GloVe embeddings
linear ensemble of logistic regression classifiers
Mental Illness detection at the World Well-Being Project for the CLPsych 2015 Shared Task
D. Preotiuc-Pietro, M. Sap, H.A. Schwartz, L. Ungar | [] |
GEM-SciDuet-train-27#paper-1028#slide-17 | 1028 | Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal The Role of Personality, Age and Gender in Tweeting about Mental Illnesses | Mental illnesses, such as depression and post traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiveroperating characteristic curve -AUC -of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169
],
"paper_content_text": [
"Introduction Mental illnesses, such as depression and post traumatic stress disorder (PTSD) represent a large share of the global burden of disease (Üstün et al., 2004; Mathers and Loncar, 2006) , but are underdiagnosed and undertreated around the world (Prince et al., 2007) .",
"Previous research has demonstrated the important role of demographic factors in depression risk.",
"For example, while clinically-assessed depression is estimated at 6.6% in a 12-month interval for U.S. adults , the prevalence in males is 3-5%, while the prevalence is 8-10% in females (Andrade et al., 2003) .",
"Similarly, prevalence of PTSD among U.S. adults in any 12-month period is estimated at 3.5% (Kessler et al., 2005b ) -1.8% in males and 5.2% in females -yet this risk is not distributed evenly across age groups; prevalence of PTSD increases throughout the majority of the lifespan to reach a peak of 9.2% between the ages of 49-59, before dropping sharply to 2.5% past the age of 60.",
"(Kessler et al., 2005a) .",
"Large scale user-generated content provides the opportunity to extract information not only about events, but also about the person posting them.",
"Using automatic methods, a wide set of user characteristics, such as age, gender, personality, location and income have been shown to be predictable from shared social media text.",
"The same holds for mental illnesses, from users expressing symptoms of their illness (e.g.",
"low mood, focus on the self, high anxiety) to talking about effects of their illness (e.g.",
"mentioning medications and therapy) and to even self-disclosing the illness.",
"This study represents an analysis of language use in users who share their mental illness though social media, in this case depression and PTSD.",
"We advocate adjusting for important underlying demographic factors, such as age and gender, to avoid confounding by language specific to these underlying characteristics.",
"The age and gender trends from the U.S. population are present in our dataset, although imperfectly, given the biases of self-reports and social media sampling.",
"Our differential language analyses show symptoms associated with these illnesses congruent with existing clinical theory and consequences of diagnoses.",
"In addition to age and gender, we focus on the important role of inferred personality in predicting 21 mental illness.",
"We show that a model which uses only the text-predicted user level 'Big Five' personality dimensions plus age and gender perform with high accuracy, comparable to methods that use standard dictionaries of psychology as features.",
"Users who self-report a diagnosis appear more neurotic and more introverted when compared to average users.",
"Data We use a dataset of Twitter users reported to suffer from a mental illness, specifically depression and post traumatic stress disorder (PTSD).",
"This dataset was first introduced in (Coppersmith et al., 2014a) .",
"The self-reports are collected by searching a large Twitter archive for disclosures using a regular expression (e.g.",
"'I have been diagnosed with depression').",
"Candidate users were filtered manually and then all their most recent tweets have been continuously crawled using the Twitter Search API.",
"The selfdisclosure messages were excluded from the dataset and from the estimation of user inferred demographics and personality scores.",
"The control users were selected at random from Twitter.",
"In total there are 370 users diagnosed only with PTSD, 483 only with depression and 1104 control users.",
"On average, each user has 3400.8 messages.",
"As Coppersmith et al.",
"(2014b) acknowledge, this method of collection is susceptible to multiple biases, but represents a simple way to build a large dataset of users and their textual information.",
"Features We use the Twitter posts of a user to infer several user traits which we expect to be relevant to mental illnesses based on standard clinical criteria (American Psychiatric Association, 2013).",
"Recently, automatic user profiling methods have used on usergenerated text and complementary features in order to predict different user traits such as: age (Nguyen et al., 2011) , gender (Sap et al., 2014) , location (Cheng et al., 2010) , impact (Lampos et al., 2014) , political preference (Volkova et al., 2014) , temporal orientation or personality (Schwartz et al., 2013) .",
"Age, Gender and Personality We use the methods developed in (Schwartz et al., 2013) to assign each user scores for age, gender and personality from the popular five factor model of personality -'Big Five ' -(McCrae and John, 1992) , which consists of five dimensions: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience.",
"The model was trained on a large sample of around 70,000 Facebook users who have taken Big Five personality tests and shared their posts using a model using 1-3 grams and topics as features Schwartz et al., 2013) .",
"This model achieves R > .3 predictive performance for all five traits.",
"This dataset is also used to obtain age and gender adjusted personality and topic distributions.",
"Affect and Intensity Emotions play an important role in the diagnosis of mental illness (American Psychiatric Association, 2013) .",
"We aim to capture the expression of users' emotions through their generated posts.",
"We characterize expressions along the dimensions of affect (from positive to negative) and intensity (from low to high), which correspond to the two primary axes of the circumplex model, a well-established system for describing emotional states (Posner et al., 2005) .",
"Machine learning approaches perform significantly better at quantifying emotion/sentiment from text compared to lexicon-based methods (Pang and Lee, 2008) .",
"Emotions are expressed at message-level.",
"Consequently, we trained a text classification model on 3,000 Facebook posts labeled by affect and intensity using unigrams as features.",
"We applied this model on each user's posts and aggregated over them to obtain a user score for both dimensions.",
"Textual Features For our qualitative text analysis we extract textual features from all of a user's Twitter posts.",
"Traditional psychological studies use a closed-vocabulary approach to modelling text.",
"The most popular method is based on Linguistic Inquiry and Word Count (LIWC) .",
"In LIWC, psychological theory was used to build 64 different categories.",
"These include different parts-of-speech, topical categories and emotions.",
"Each user is thereby represented as a distribution over these categories.",
"We also use all frequent 1-3 grams (used by more than 10% of users in our dataset), where we use pointwise mutual information (PMI) to filter infrequent 2-3 grams.",
"For a better qualitative assessment and to reduce risk of overfitting, we use a set of topics as a form of dimensionality reduction.",
"We use the 2,000 clusters introduced in (Schwartz et al., 2013) obtained by applying Latent Dirichlet Allocation (Blei et al., 2003) , the most popular topic model, to a large set of Facebook posts.",
"Prediction In this section we present an analysis of the predictive power of inferred user-level features.",
"We use the methods introduced in Section 3 to predict nine user level scores: age, gender, affect, intensity and the Big Five personality traits.",
"The three populations in our dataset are used to formulate three binary classification problems in order to analyse specific pairwise group peculiarities.",
"Users having both PTSD and depression are held-out when classifying between these two classes.",
"To assess the power of our text-derived features, we use as features broader textual features such as the LIWC categories, the LDA inferred topics and frequent 1-3 grams.",
"We train binary logistic regression classifiers (Pedregosa et al., 2011) with Elastic Net regularisation (Zou and Hastie, 2005) .",
"In Table 1 we report the performance using 10-fold cross-validation.",
"Performance is measured using ROC area under the curve (ROC AUC), an adequate measure when the classes are imbalanced.",
"A more thorough study of predictive performance for identifying PTSD and depressed users is presented in (Preoţiuc-Pietro et al., 2015) .",
"Our results show the following: • Age alone improves over chance and is highly predictive when classifying PTSD users.",
"To visualise the effect of age, Figure 1 shows the probability density function in our three populations.",
"This highlights that PTSD users are consistently predicted older than both controls and depressed users.",
"This is in line with findings from the National Comorbidity Survey and replications (Kessler et al., 2005a ; Kessler et al., Figure 1 : Age density functions for each group.",
"• Gender is only weakly predictive of any mental illness, although significantly above chance in depressed vs. controls (p < .01, DeLong test 1 ).",
"Interestingly, in this task age and gender combined improve significantly above each individual prediction, illustrating they contain complementary information.",
"Consequently, at least when analysing depression, gender should be accounted for in addition to age.",
"• Personality alone obtains very good predictive accuracies, reaching over .8 ROC AUC for classifying depressed vs. PTSD.",
"In general, personality features alone perform with strong predictive accuracy, within .1 of >5000 unigram features or 2000 topics.",
"Adding age and gender information further improves predictive power (C-P p < .01, D-P p < .01, DeLong test) when PTSD is one of the compared groups.",
"In Figure 2 we show the mean personality scores across the three groups.",
"In this dataset, PTSD users score highest on average in openness with depressed users scoring lowest.",
"However, neuroticism is the largest separator between mentally ill users and the controls, with depressed having slightly higher levels of neuroticism than PTSD.",
"Neuroticism alone has an ROC AUC of .732 in prediction depression vs. control and .674 in predicting PTSD vs. control.",
"Controls score higher on extraversion, a trait related to the frequency and intensity of positive emotions (Smillie et al., 2012) .",
"Controlling for age (Figure 2b ) significantly reduces the initial association between PTSD and higher conscientiousness, because PTSD users are likely to be older, and conscientiousness tends to increase with age (Soto et al., 2011) .",
"After controlling, depressed users score lowest on conscientiousness, while PTSD and controls are close to each other.",
"• Average affect and intensity achieve modest predictive performance, although significant (C-D p < .001, D-P p < .001, DeLong test) when one of the compared groups are depressed.",
"We use the two features to map users to the emotion circumplex in Figure 3 .",
"On average, control users expressed both higher intensity and higher (i.e.",
"more positive) affect, while depressed users were lowest on both.",
"This is consistent with the lowered (i.e.",
"more negative) affect typically seen in both PTSD and depressed patients, and the increased intensity/arousal among PTSD users may correspond to more frequent expressions of anxiety, which is characterized by high arousal and lower/negative affect (American Psychiatric Association, 2013).",
"• Textual features obtain high predictive performance.",
"Out of these, LIWC performs the worst, while the topics, unigrams and 1-3 grams have similarly high performance.",
"In addition to ROC AUC scores, we present ROC curves for all three binary prediction tasks in Figures 4a, 4b and 4c .",
"ROC curves are specifically useful for medical practitioners because the classification threshold can be adjusted to choose an applicationappropriate level of false positives.",
"For comparison, we display methods using only age and gender; age, gender and personality combined, as well as LIWC and the LDA topics.",
"For classifying depressed users from controls, a true positive rate of ∼ 0.6 can be achieved at a false positive rate of ∼ 0.2 using personality, age and gender alone, with an increase to up to ∼ 0.7 when PTSD users are one of the groups.",
"When classifying PTSD users, age is the most important factor.",
"Separating between depressed and PTSD is almost exclusively a factor of age.",
"This suggests that a application in a real life scenario will likely overpredict older users to have PTSD.",
"Language Analysis The very high predictive power of the user-level features and textual features motivates us to analyse the linguistic features associated with each group, taking into account age and gender.",
"We study differences in language between groups using differential language analysis -DLA (Schwartz et al., 2013) .",
"This method aims to find all the most discriminative features between two groups by correlating each individual feature (1-3 gram or topic) to the class label.",
"In our case, age and gender are included as covariates in order to control for the effect they may have on the outcome.",
"Since a large number of features are explored, we consider coefficients significant if they meet a Bonferroni-corrected two-tailed p-value of less than 0.001.",
"Language of Depression The word cloud in Figure 5a displays the 1-3 grams that most distinguish the depressed users from the set of control users.",
"Many features show face validity (e.g.",
"'depressed'), but also appear to represent a number of the cognitive and emotional processes implicated in depression in the literature (American Psychiatric Association, 2013).",
"1-3 grams seem to disclose information relating to illness and illness management (e.g.",
"'depressed', 'illness', 'meds', 'pills', 'therapy').",
"In some of the most strongly correlated features we also observe an increased focus on the self (e.g.",
"'I', 'I am', 'I have', 'I haven't', 'I was', 'myself') which has been found to accompany depression in many studies and often accompanies states of psychological distress (Rude et al., 2004; Stirman and Pennebaker, 2001; Bucci and Freedman, 1981) .",
"Depression classically relies on the presence of two sets of core symptoms: sustained periods of low mood (dysphoria) and low interest (anhedonia) (American Psychiatric Association, 2013) .",
"Phrases such as 'cry' and 'crying' suggest low mood, while 'anymore' and 'I used to' may suggest a discontinuation of activities.",
"Suicidal ideations or more general thoughts of death and dying are symptoms used in the diagnosis of depression, and even though they are relatively rarely mentioned (grey color), are identified in the differential language analysis (e.g.",
"'suicide', 'to die').",
"Beyond what is generally thought of as the key symptoms of depression discussed above, the differential language analysis also suggests that anger and interpersonal hostility ('fucking') feature significantly in the language use of depressed users.",
"The 10 topics most associated with depression (correlation values ranging from R = .282 to R = .229) suggest similar themes, including dysphoria (e.g.",
"'lonely', 'sad', 'crying' -Figures 6b, 6c, 6f ) and thoughts of death (e.g.",
"'suicide' - Figure 6h ).",
"Figure 5 : The word clouds show the 1-3 grams most correlated with each group having a mental illness, with the set of control users serving as the contrastive set in both cases.",
"The size of the 1-3 gram is scaled by the correlation to binary depression label (point-biserial correlation).",
"The color indexes relative frequency, from grey (rarely used) through blue (moderately used) to red (frequently used).",
"Correlations are controlled for age and gender.",
"Language of PTSD The word cloud in Figure 5b and topic clouds in Figure 7 display the 1-3 grams and topics most correlated with PTSD, with topic correlation values ranging from R = .280 to R = .237.",
"On the whole, the language most predictive of PTSD does not map as cleanly onto the symptoms and criteria for diagnosis of PTSD as was the case with depression.",
"Across topics and 1-3 grams, the language most correlated with PTSD suggests 'depression', disease management (e.g.",
"'pain', 'pills', 'meds' - Figure 7c ) and a focus on the self (e.g.",
"'I had', 'I was', 'I am', 'I would').",
"Similarly, language is suggestive of death (e.g.",
"'suicide', 'suicidal').",
"Compared to the language of depressed users, themes within the language of users with PTSD appear to reference traumatic experiences that are required for a diagnosis of PTSD (e.g.",
"'murdered', 'died'), as well as the resultant states of fear-like psychological distress (e.g.",
"'terrified', 'anxiety').",
"PTSD and Depression From our predictive experiments and Figure 4c , we see that language-predicted age almost completely differentiates between PTSD and depressed users.",
"Consequently, we find only a few features that distinguish between the two groups when controlling for age.",
"To visualise differences between the diseases we visualize topic usage in both groups in Figure 8 .",
"This shows standardised usage in both groups for each topic.",
"As an additional factor (color), we include Figure 6 : The LDA topics most correlated with depression controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"Figure 7 : The LDA topics most correlated with PTSD controlling for age and gender, with the set of control users serving as the contrastive set.",
"Word size is proportional to the probability of the word within the topics.",
"Color is for display only.",
"the personality trait of neuroticism.",
"This plays the most important role in separating between mentally ill users and controls.",
"The topics marked by arrows in Figure 8 are some of the topics most used by users with depression and PTSD shown above in Figures 6-7 .",
"Of the three topics, the topic shown in Figure 6h has 'suicide' as the most prevalent word.",
"This topic's use is elevated for both depression and PTSD.",
"Figure 6f shows a topic used mostly by depressed users, while Figure 7c highlights a topic used mainly by users with PTSD.",
"Related Work Prior studies have similarly examined the efficacy of utilising social media data, like Facebook and Twitter, to ascertain the presence of both depression and PTSD.",
"For instance, Coppersmith et al.",
"(2014b) analyse differences in patterns of language use.",
"They report that individuals with PTSD were significantly more likely to use third person pronouns and significantly less likely to use second person pronouns, without mentioning differences in the use of first person pronouns.",
"This is in contrast to the strong differences in first person pronoun use among depressed individuals documented in the literature ( Rude et al., 2004; Stirman and Pennebaker, 2001) , confirmed in prior Twitter studies (Coppersmith et al., 2014a; De Choudhury et al., 2013) and replicated here.",
"De Choudhury et al.",
"(2013) explore the relationships between social media postings and depressive status, finding that geographic variables can alter one's risk.",
"They show that cities for which the highest numbers of depressive Twitter users are predicted correlate with the cities with the known highest depression rates nationwide; depressive tweets follow an expected diurnal and annual rhythm (peaking at night and during winter); and women exhibit an increased risk of depression relative to men, consistent with known psychological trends.",
"These studies thus demonstrate the utility of using social media outlets to capture nuanced data about an individual's daily psychological affect to predict pathology, and suggest that geographic and demographic factors may alter the prevalence of psychological ill-being.",
"The present study is unique in its efforts to control for some of these demographic factors, such as personality and age, that demonstrably influence an individual's pattern of language use.",
"Further, these demographic characteristics are known to significantly alter patterns e.g.",
"pronoun use (Pennebaker, 2011) .",
"This highlights the utility of controlling for these factors when analysing pathological states like depression or PTSD.",
"Conclusions This study presented a qualitative analysis of mental illness language use in users who disclosed their diagnoses.",
"For users diagnosed with depression or PTSD, we have identified both symptoms and effects of their mental condition from user-generated content.",
"The majority of our results map to clinical theory, confirming the validity of our methodology and the relevance of the dataset.",
"In our experiments, we accounted for text-derived user features, such as demographics (e.g.",
"age, gender) and personality.",
"Text-derived personality alone showed high predictive performance, in one case reaching similar performance to using orders of magnitude more textual features.",
"Our study further demonstrated the potential for using social media as a means for predicting and analysing the linguistic markers of mental illnesses.",
"However, it also raises a few questions.",
"First, although apparently easily predictable, the difference between depressed and PTSD users is largely only due to predicted age.",
"Sample demographics also appear to be different than the general population, making predictive models fitted on this data to be susceptible to over-predicting certain demographics.",
"Secondly, the language associated with a selfreported diagnosis of depression and PTSD has a large overlap with the language predictive of personality.",
"This suggests that personality may be explanatory of a particular kind of behavior: posting about mental illness diagnoses online.",
"The mental illness labels thus acquired likely have personality confounds 'baked into them', stressing the need for using stronger ground truth such as given by clinicians.",
"Further, based on the scope of the applicationswhether screening or analysis of psychological risk factors -user-generated data should at minimum be temporally partitioned to encompass content shared before and after the diagnosis.",
"This allows one to separate mentions of symptoms from discussions of and consequences of their diagnosis, such as the use of medications.",
"28"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data",
"Features",
"Age, Gender and Personality",
"Affect and Intensity",
"Textual Features",
"Prediction",
"Language Analysis",
"Language of Depression",
"Language of PTSD",
"PTSD and Depression",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-27#paper-1028#slide-17 | Take Home | Control the analysis for age & gender
Personality plays an important role in mental illnesses
Language use of depressed/PTSD reveals symptoms, emotions, and cognitive processes. | Control the analysis for age & gender
Personality plays an important role in mental illnesses
Language use of depressed/PTSD reveals symptoms, emotions, and cognitive processes. | [] |
GEM-SciDuet-train-28#paper-1035#slide-0 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-0 | Machine Reading Comprehension MRC | Passage: ... Tesla later approached Morgan to ask for more funds to build a more powerful transmitter. When asked where all the money had gone, Tesla responded by saying that he was affected by the Panic of 1901, which he
(Morgan) had caused Morgan was shocked by the reminder of his part in the stock market ...
Passage: Question: On what did
Tesla blame for the loss of
When asked where all the money the initial money?
was affected by the Panic of 1901 Answer: Panic of 1901
* Different types: cloze test, entity extraction, span extraction, multiple-choice ... | Passage: ... Tesla later approached Morgan to ask for more funds to build a more powerful transmitter. When asked where all the money had gone, Tesla responded by saying that he was affected by the Panic of 1901, which he
(Morgan) had caused Morgan was shocked by the reminder of his part in the stock market ...
Passage: Question: On what did
Tesla blame for the loss of
When asked where all the money the initial money?
was affected by the Panic of 1901 Answer: Panic of 1901
* Different types: cloze test, entity extraction, span extraction, multiple-choice ... | [] |
GEM-SciDuet-train-28#paper-1035#slide-1 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-1 | Applying MRC to the Web | how many teams will be in the 2022 world cup? u Q
2022 FIFA World Cup - Wikipedia
In the end, there were five bids for the 2022 FIFA World Cup: Australia, Japan, Qatar, South Korea and ...  All of them seem relevant.
Teams: 32 (from 5 or 6 confederations) Dates: 21 November 18 December
om 5 or 6 confederations) Dates: 21 November - 18 December country: Qatar Venue(s): 8 or 12 (in 5 or 8 host cities)
2018 FIFA World Cup qualification - Wikipedia
https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification v United Arab Emirates Ahmed Khalil (16 goals each). 2014 - 2022 . The 2018 FIFA World Cup qualification process was a series of tournaments organised by the six FIFA confederations to decide 31 of the 32 teams which will play in the 2018 FIFA World Cup, with .... the suspension of their football association by FIFA on 30 May 2015. Teams: 210 (from 6 confederations) Goals scored: 2,454 (2.81 per match) Matches played: 872
Qataris considering a 48-team option for 2022 World Cup - The ... https:/Avww.washingtonpost.com/...team...2022-world-cup/.../64dae0e6-8214-118-b3b... 5 days ago - The organizers of the 2022 World Cup in Qatar are open to talks about a ... power it apparently gained is key to any progress on the tournament expansion. ... 32 nations from which 16 winners would join 16 seeded teams in a ...
* Search engine is employed.
Multiple passages are retrieved.
|. WIKIDE dia. OFg/WIKI/ ZU A World_Cup_qualification v United Arab Emirates Ahmed Khalil (16 goals cock). 2014 - 2022 . The 2018 FIFA World Cup
2022 FIFA World Cup - Wiki Wikipedia
https://en. wikipedia. grgauikil202 Nay
But they give different answers!
5 days ago - The organizers o of the 2022 World cup i in Qatar are open to talks about a ... power it d agsess on the tournament expansion. ... 32 nations from which 16
2022 FIFA World Cup ~ Wiki es
https: //en. wikipedia. orgauilis20
In the end, there werd re is for the FA World Cup} Australia, Japan, Qatar, South Korea and
All of them seem relevant.
Much more misleading candidates | how many teams will be in the 2022 world cup? u Q
2022 FIFA World Cup - Wikipedia
In the end, there were five bids for the 2022 FIFA World Cup: Australia, Japan, Qatar, South Korea and e All of t h em seem rel evant.
Teams: 32 (from 5 or 6 confederations) Dates: 21 November 18 December
om 5 or 6 confederations) Dates: 21 November - 18 December country: Qatar Venue(s): 8 or 12 (in 5 or 8 host cities)
2018 FIFA World Cup qualification - Wikipedia
https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification v United Arab Emirates Ahmed Khalil (16 goals each). 2014 - 2022 . The 2018 FIFA World Cup qualification process was a series of tournaments organised by the six FIFA confederations to decide 31 of the 32 teams which will play in the 2018 FIFA World Cup, with .... the suspension of their football association by FIFA on 30 May 2015. Teams: 210 (from 6 confederations) Goals scored: 2,454 (2.81 per match) Matches played: 872
Qataris considering a 48-team option for 2022 World Cup - The ... https:/Avww.washingtonpost.com/...team...2022-world-cup/.../64dae0e6-8214-118-b3b... 5 days ago - The organizers of the 2022 World Cup in Qatar are open to talks about a ... power it apparently gained is key to any progress on the tournament expansion. ... 32 nations from which 16 winners would join 16 seeded teams in a ...
* Search engine is employed.
Multiple passages are retrieved.
|. WIKIDE dia. OFg/WIKI/ ZU A World_Cup_qualification v United Arab Emirates Ahmed Khalil (16 goals cock). 2014 - 2022 . The 2018 FIFA World Cup
2022 FIFA World Cup - Wiki Wikipedia
https://en. wikipedia. grgauikil202 Nay
But they give different answers!
5 days ago - The organizers o of the 2022 World cup i in Qatar are open to talks about a ... power it d agsess on the tournament expansion. ... 32 nations from which 16
2022 FIFA World Cup ~ Wiki es
https: //en. wikipedia. orgauilis20
In the end, there werd re is for the FA World Cup} Australia, Japan, Qatar, South Korea and
All of them seem relevant.
Much more misleading candidates | [] |
GEM-SciDuet-train-28#paper-1035#slide-2 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-2 | An Example from MS MARCO Dataset | Question: What is the difference between a mixed and pure culture?
1) A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture. While the answer given is...
2) ...The mixed economy is a balance between socialism and capitalism. As a result, some institutions are owned and maintained by...
6) A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies. A pure culture... [Correct Answer]
4) ...A pure culture comprises a single species or strains. A mixed culture is taken from a source and may contain multiple strains or species. A contaminated... [Verify]
5) ...It will be at that time when we can truly obtain a pure culture. A pure culture is a culture consisting of only one strain. You can obtain a pure culture by picking...
@ Incorrect = Partially Correct = Correct
> Similar or same | Question: What is the difference between a mixed and pure culture?
1) A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture. While the answer given is...
2) ...The mixed economy is a balance between socialism and capitalism. As a result, some institutions are owned and maintained by...
6) A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies. A pure culture... [Correct Answer]
4) ...A pure culture comprises a single species or strains. A mixed culture is taken from a source and may contain multiple strains or species. A contaminated... [Verify]
5) ...It will be at that time when we can truly obtain a pure culture. A pure culture is a culture consisting of only one strain. You can obtain a pure culture by picking...
@ Incorrect = Partially Correct = Correct
> Similar or same | []
GEM-SciDuet-train-28#paper-1035#slide-3 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-3 | Overview of Our Model | Question; Passage 1, Passage 2, ..., Passage n; Answer A_1, ..., Answer A_n; Prediction: P(start), P(end); weighted; Answer Content Modeling; Final Answer | Question; Passage 1, Passage 2, ..., Passage n; Answer A_1, ..., Answer A_n; Prediction: P(start), P(end); weighted; Answer Content Modeling; Final Answer | [] |
GEM-SciDuet-train-28#paper-1035#slide-4 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-4 | Input | Question Passage 1 Passage 2 | Question Passage 1 Passage 2 | [] |
GEM-SciDuet-train-28#paper-1035#slide-5 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-5 | Question and Passage Encoding | Question Passage | Passage 2 on Passage n .
Y v Y creo with Bi-LSTM: | Question Passage | Passage 2 on Passage n .
Y v Y creo with Bi-LSTM: | [] |
GEM-SciDuet-train-28#paper-1035#slide-6 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-6 | Question Passage Matching | Question Passage 1 Passage 2 os Passage n
uP uP: UPa * Bi-directional Attention Flow | Question Passage 1 Passage 2 os Passage n
uP uP: UPa * Bi-directional Attention Flow | [] |
GEM-SciDuet-train-28#paper-1035#slide-7 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-7 | Answer Boundary Prediction | Question Passage 1 Passage 2 ... Passage n
Start and end pointer:
(pointer-network equation for start/end prediction) | Question Passage 1 Passage 2 ... Passage n
Start and end pointer:
(pointer-network equation for start/end prediction) | []
GEM-SciDuet-train-28#paper-1035#slide-8 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-8 | Answer Content Modeling | Question Passage 1 Passage 2 ... Passage n
* Content score for each
Answer A1, Answer A2
(per-word content probabilities) | Question Passage 1 Passage 2 ... Passage n
* Content score for each
Answer A1, Answer A2
(per-word content probabilities) | []
GEM-SciDuet-train-28#paper-1035#slide-9 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-9 | Cross Passage Answer Verification | s_{i,j} = 0 if i = j, r^A_i · r^A_j otherwise | s_{i,j} = 0 if i = j, r^A_i · r^A_j otherwise | [] |
GEM-SciDuet-train-28#paper-1035#slide-10 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-10 | Joint Training and Prediction | Finding the boundary of the answer
Predicting whether each word should be included in the answer
* Selecting the best answer from all the candidates
L_joint = L_boundary + β1 L_content + β2 L_verify
Score = S_boundary × S_content × S_verify | Finding the boundary of the answer
Predicting whether each word should be included in the answer
* Selecting the best answer from all the candidates
L_joint = L_boundary + β1 L_content + β2 L_verify
Score = S_boundary × S_content × S_verify | [] |
GEM-SciDuet-train-28#paper-1035#slide-11 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-11 | Experiments Setup | * Datasets: MS-MARCOF! and DuReader!7!:
eee Search CoV erst Ce MUU) CoV erst Ce MUU) ai aay ala Multi Annotated Answers Multi Answer Spans
eee Search Questions with Questions with
Hyper-parameters (tuned on the dev set):
Glove Random cies aan a | * Datasets: MS-MARCOF! and DuReader!7!:
eee Search CoV erst Ce MUU) CoV erst Ce MUU) ai aay ala Multi Annotated Answers Multi Answer Spans
eee Search Questions with Questions with
Hyper-parameters (tuned on the dev set):
Glove Random cies aan a | [] |
GEM-SciDuet-train-28#paper-1035#slide-12 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-12 | Main Results | Tab 1. Performance on MS-MARCO test set | Tab 1. Performance on MS-MARCO test set | [] |
GEM-SciDuet-train-28#paper-1035#slide-14 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-14 | Quantitative Analysis the Predicted Scores | Question: What is the difference between a mixed and pure culture
[1] A culture is a society's total way of living and a society is a group...
[2] The mixed economy is a balance between socialism and capitalism.
[6] A pure culture is one in which only one kind of microbial species... | 5.8 x 10^-3
[4] A pure culture comprises a single species or strains. A mixed ... 2.7 x 10^-3
[5] A pure culture is a culture consisting of only one strain. 5.8 x 10^-4
Boundary / content / verification scores are usually positively correlated
Answer Candidates: Boundary Content Verification | Question: What is the difference between a mixed and pure culture
[1] A culture is a society's total way of living and a society is a group...
[2] The mixed economy is a balance between socialism and capitalism.
[6] A pure culture is one in which only one kind of microbial species... | 5.8 x 10^-3
[4] A pure culture comprises a single species or strains. A mixed ... 2.7 x 10^-3
[5] A pure culture is a culture consisting of only one strain. 5.8 x 10^-4
Boundary / content / verification scores are usually positively correlated
Answer Candidates: Boundary Content Verification | [] |
GEM-SciDuet-train-28#paper-1035#slide-15 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-15 | Necessity of the Content Model | [Figure: start, end and content probabilities over the words of an example passage]
When the answer is long, boundary words carry little information.
Content words reflect the real semantics of this answer. | [Figure: start, end and content probabilities over the words of an example passage]
When the answer is long, boundary words carry little information.
Content words reflect the real semantics of this answer. | []
GEM-SciDuet-train-28#paper-1035#slide-16 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-16 | Visualization of the Probability Distribution | content probability end probability start probability
[Figure: start, end and content probabilities over the words of an example passage]
zB se pasn yun asaey oyur
-mM- SMOT yorya 40y au aw pue
sey yun asaey unou ouL -aad- unou -aaT- yun asaey | [] |
GEM-SciDuet-train-28#paper-1035#slide-17 | 1035 | Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification | Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231
],
"paper_content_text": [
"Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016) .",
"Recent years have seen rapid growth in the MRC community.",
"With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017) .",
"Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017) .",
"A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset 1 (Rajpurkar et al., 2016 ).",
"However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web.",
"Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines.",
"For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer.",
"One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it's probable that multiple confusing answer candidates (correct or incorrect) exist.",
"Table 1 shows an example from MS-MARCO.",
"We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect.",
"As is shown by Jia and Liang (2017) , these confusing answer candidates could be quite difficult for MRC models to distinguish.",
"Therefore, special consideration is required for such multi-passage MRC problem.",
"In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers.",
"Our hypothesis is that the cor-Question: What is the difference between a mixed and pure culture?",
"Passages: [1] A culture is a society's total way of living and a society is a group that live in a defined territory and participate in common culture.",
"While the answer given is in essence true, societies originally form for the express purpose to enhance .",
".",
".",
"[2] .",
".",
".",
"There has been resurgence in the economic system known as capitalism during the past two decades.",
"4.",
"The mixed economy is a balance between socialism and capitalism.",
"As a result, some institutions are owned and maintained by .",
".",
".",
"[3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"Culture on the other hand, is the lifestyle that the people in the country .",
".",
".",
"[4] Best Answer: A pure culture comprises a single species or strains.",
"A mixed culture is taken from a source and may contain multiple strains or species.",
"A contaminated culture contains organisms that derived from some place .",
".",
".",
"[5] .",
".",
".",
"It will be at that time when we can truly obtain a pure culture.",
"A pure culture is a culture consisting of only one strain.",
"You can obtain a pure culture by picking out a small portion of the mixed culture .",
".",
".",
"[6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"A pure culture is a culture consisting of only one strain.",
".",
".",
".",
"· · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies.",
"rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another.",
"The example in Table 1 demonstrates this phenomenon.",
"We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages.",
"As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process.",
"The overall framework of our model is demonstrated in Figure 1 , which consists of three modules.",
"First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer ( Figure 2) .",
"Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective.",
"Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations.",
"We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not.",
"Therefore, the final answer is determined by three factors: the boundary, the content and the crosspassage answer verification.",
"The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework.",
"We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets.",
"Figure 1 gives an overview of our multi-passage MRC model which is mainly composed of three modules including answer boundary prediction, answer content modeling and answer verification.",
"First of all, we need to model the question and passages.",
"Following Seo et al.",
"(2016) , we compute the question-aware representation for each passage (Section 2.1).",
"Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2).",
"At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations.",
"Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information and we compute one score for each candidate Figure 1 : Overview of our method for multi-passage machine reading comprehension to indicate whether it is correct or not according to the verification.",
"The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).",
"Our Approach Question and Passage Modeling Given a question Q and a set of passages {P i } retrieved by search engines, our task is to find the best concise answer to the question.",
"First, we formally present the details of modeling the question and passages.",
"Encoding We first map each word into the vector space by concatenating its word embedding and sum of its character embeddings.",
"Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P i } as follows: u Q t = BiLSTM Q (u Q t−1 , [e Q Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted.",
"We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions.",
"The similarity matrix S ∈ R |Q|×|P i | between the question and passage i is changed to a simpler version, where the similarity between the t th word in the question and the k th word in passage i is computed as: S t,k = u Q t · u P i k (3) Then the context-to-question attention and question-to-context attention is applied strictly following Seo et al.",
"(2016) to obtain the questionaware passage representation {ũ P i t }.",
"We do not give the details here due to space limitation.",
"Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output: v P i t = BiLSTM M (v P i t−1 ,ũ P i t ) (4) Based on the passage representations, we introduce the three main modules of our model.",
"Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called boundary model.",
"Following (Wang and Jiang, 2016) , we employ Pointer Network (Vinyals et al., 2015) to compute the probability of each word to be the start or end position of the span: g t k = w a 1 tanh(W a 2 [v P k , h a t−1 ]) (5) α t k = exp(g t k )/ |P| j=1 exp(g t j ) (6) c t = |P| k=1 α t k v P k (7) h a t = LSTM(h a t−1 , c t ) (8) By utilizing the attention weights, the probability of the k th word in the passage to be the start and end position of the answer is obtained as α 1 k and α 2 k .",
"It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P so that the probabilities are comparable across passages.",
"This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices: L boundary = − 1 N N i=1 (log α 1 y 1 i + log α 2 y 2 i ) (9) where N is the number of samples in the dataset and y 1 i , y 2 i are the gold start and end positions.",
"Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer.",
"However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification.",
"An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such model end-to-end.",
"Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities.",
"Specifically, we change the output layer of the classic MRC model.",
"Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer.",
"The content probability of the k th word is computed as: p c k = sigmoid(w c 1 ReLU(W c 2 v P i k )) (10) Training this content model is also quite intuitive.",
"We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.",
"In this way, we define the loss function as the averaged cross entropy: L content = − 1 N 1 |P| N i=1 |P | j=1 [y c k log p c k + (1 − y c k ) log(1 − p c k )] (11) The content probabilities provide another view to measure the quality of the answer in addition to the boundary.",
"Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage: r A i = 1 |P i | |P i | k=1 p c k [e P i k , c P i k ] (12) Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information.",
"However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction.",
"It's necessary to aggregate the information from different passages and choose the best one from those candidates.",
"Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process.",
"Given the representation of the answer candidates from all passages {r A i }, each answer candidate then attends to other candidates to collect supportive information via attention mechanism: s i,j = 0, if i = j, r A i · r A j , otherwise (13) α i,j = exp(s i,j )/ n k=1 exp(s i,k ) (14) r A i = n j=1 α i,j r A j (15) Herer A i is the collected verification information from other passages based on the attention weights.",
"Then we pass it together with the original representation r A i to a fully connected layer: g v i = w v [r A i ,r A i , r A i r A i ] (16) We further normalize these scores over all passages to get the verification score for answer candidate A i : p v i = exp(g v i )/ n j=1 exp(g v j ) (17) In order to train this verification model, we take the answer from the gold passage as the gold answer.",
"And the loss function can be formulated as the negative log probability of the correct answer: L verif y = − 1 N N i=1 log p v y v i (18) where y v i is the index of the correct answer in all the answer candidates of the i th instance .",
"Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification.",
"According to our design, these three tasks can share the same embedding, encoding and matching layers.",
"Therefore, we propose to train them together as multi-task learning (Ruder, 2017) .",
"The joint objective function is formulated as follows: L = L boundary + β 1 L content + β 2 L verif y (19) where β 1 and β 2 are two hyper-parameters that control the weights of those tasks.",
"When predicting the final answer, we take the boundary score, content score and verification score into consideration.",
"We first extract the answer candidate A i that has the maximum boundary score from each passage i.",
"This boundary score is computed as the product of the start and end probability of the answer span.",
"Then for each answer candidate A i , we average the content probabilities of all its words as the content score of A i .",
"And we can also predict the verification score for A i using the verification model.",
"Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.",
"Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets.",
"Our method achieves the state-of-the-art performance on both datasets.",
"Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other.",
"Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible.",
"Table 2 shows the proportion of questions that have multiple answers.",
"However, the same answer that occurs many times is treated as one single answer here.",
"Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers.",
"A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer.",
"From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader.",
"These answers will provide strong signals for answer verification if we can leverage them properly.",
"Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training.",
"We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training.",
"The character embeddings are randomly initialized with its dimension as 30.",
"For DuReader, we follow the preprocessing described in He et al.",
"(2017) .",
"We tune the hyper-parameters according to the Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45 Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively.",
"For MS-MARCO, approximately 8% questions have the answers as Yes or No, which usually cannot be solved by extractive approach (Tan et al., 2017) .",
"We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with \"is\").",
"Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is \"Yes\" or \"No\".",
"For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017) .",
"The original paper employs a simple a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this.",
"We will demonstrate the effects of these two technologies later.",
"Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.",
"We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002) .",
"As we can see, for both metrics, our single model outperforms all the other competing models with an evident margin, which is extremely hard considering the near-human per- formance.",
"If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al.",
"(2017) , especially in terms of the BLEU-1.",
"Results on MS-MARCO Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4 .",
"The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017) .",
"Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model.",
"We can see that this paragraph ranking can boost the BiDAF baseline significantly.",
"Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.",
"Question: What is the difference between a mixed and pure culture Scores Answer Candidates: Boundary Content Verification [1] A culture is a society's total way of living and a society is a group .",
".",
".",
"1.0 × 10 −2 1.0 × 10 −1 1.1 × 10 −1 [2] The mixed economy is a balance between socialism and capitalism.",
"1.0 × 10 −4 4.0 × 10 −2 3.2 × 10 −2 [3] A pure culture is one in which only one kind of microbial species is .",
".",
".",
"5.5 × 10 −3 7.7 × 10 −2 1.2 × 10 −1 [4] A pure culture comprises a single species or strains.",
"A mixed .",
".",
".",
"2.7 × 10 −3 8.1 × 10 −2 1.3 × 10 −1 [5] A pure culture is a culture consisting of only one strain.",
"5.8 × 10 −4 7.9 × 10 −2 5.1 × 10 −2 [6] A pure culture is one in which only one kind of microbial species .",
".",
".",
"5.8 × 10 −3 9.1 × 10 −2 2.7 × 10 −1 .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Analysis and Discussion Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5 .",
"Following Tan et al.",
"(2017) , we mainly focus on the ROUGE-L score that is averaged case by case.",
"We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing.",
"Then we remove the content model in order to test the necessity of modeling the content of the answer.",
"Since we don't have the content scores, we use the boundary probabilities instead to compute the answer representation for verification.",
"Next, to show the benefits of joint training, we train the boundary model separately from the other two models.",
"Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model.",
"From Table 5 , we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC.",
"For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3.",
"Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers.",
"At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification.",
"This significant improvement proves the effectiveness of our approach.",
"Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1.",
"For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively.",
"On the one hand, we can see that these three scores generally have some relevance.",
"For example, the second candidate is given lowest scores by all the three models.",
"We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much.",
"On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6] ).",
"Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3] , [4], [6] ), which are all valid answers in this case.",
"By multiplying the three scores, our model finally predicts the answer correctly.",
"Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities.",
"We argue that this content model is necessary for our answer verification process.",
"Figure 2 plots the predicted content probabilities as well as the boundary probabilities The noun charge unit has 1 sense : 1 .",
"a measure of the quantity of electricity -LRB-determined by the amount of an electric current and the time for which it flows -RRB-.",
"familiarity info : charge unit used as a noun is very rare .",
"start probability end probability content probability Figure 2 : The boundary probabilities and content probabilities for the words in a passage for a passage.",
"We can see that the boundary and content probabilities capture different aspects of the answer.",
"Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it's difficult to model the real difference among different answer candidates.",
"On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which can provide more distinguishable information for verifying the correct answer.",
"Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g.",
"\"and\" and \".\")",
"get lower weights in the final answer representation.",
"We believe that this refined representation is also good for the answer verification process.",
"Related Work Machine reading comprehension made rapid progress in recent years, especially for singlepassage MRC task, such as SQuAD (Rajpurkar et al., 2016) .",
"Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting answer span from the given passage, which is usually achieved by predicting the start and end position of the answer.",
"We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016) .",
"Another inspiring work is from Wang et al.",
"(2017c) , where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage.",
"Our verification model adopts a similar idea.",
"However, we collect information across passages and our attention is based on the answer representation, which is much more efficient than attention over all passages.",
"For the model training, Xiong et al.",
"(2017) argues that the boundary loss encourages exact answers at the cost of penalizing overlapping answers.",
"Therefore they propose a mixed objective that incorporates rewards derived from word overlap.",
"Our joint training approach has a similar function.",
"By taking the content and verification loss into consideration, our model will give less loss for overlapping answers than those unmatched answers, and our loss function is totally differentiable.",
"Recently, we also see emerging interests in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial community (Nguyen et al., 2016; He et al., 2017) .",
"Early studies (Shen et al., 2017; Wang et al., 2017c) usually concat those passages and employ the same models designed for singlepassage MRC.",
"However, more and more latest studies start to design specific methods that can read multiple passages more effectively.",
"In the aspect of passage selection, Wang et al.",
"(2017a) introduced a pipelined approach that rank the passages first and then read the selected passages for answering questions.",
"Tan et al.",
"(2017) treats the passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.",
"Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process.",
"Speaking of the answer verification, Wang et al.",
"(2017b) has a similar motivation to ours.",
"They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates.",
"However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end.",
"Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task .",
"We creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively.",
"All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement.",
"The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"3",
"3.1",
"3.2",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Question and Passage Modeling",
"Answer Boundary Prediction",
"Answer Content Modeling",
"Cross-Passage Answer Verification",
"Joint Training and Prediction",
"Experiments",
"Datasets",
"Implementation Details",
"Results on DuReader",
"Ablation Study",
"Case Study",
"Necessity of the Content Model",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-28#paper-1035#slide-17 | Conclusion | * Multi-passage MRC: much more misleading answers
* End-to-end model for multi-passage MRC:
Find the answer boundary
* Model the answer content
* Cross-passage answer verification
Joint training and prediction
SOTA performance on two datasets created from real-world web data: | * Multi-passage MRC: much more misleading answers
* End-to-end model for multi-passage MRC:
Find the answer boundary
* Model the answer content
* Cross-passage answer verification
Joint training and prediction
SOTA performance on two datasets created from real-world web data: | [] |
GEM-SciDuet-train-29#paper-1038#slide-0 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-0 | Introduction | Well describe the Linear A/Minoan digital corpus and the approaches we applied to develop it
Why we should develop a Linear A Corpus and the reasons for which we chose XML-TEI EpiDoc
Available resources and developing process
The Linear A Corpus as Cultural Heritage
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | Well describe the Linear A/Minoan digital corpus and the approaches we applied to develop it
Why we should develop a Linear A Corpus and the reasons for which we chose XML-TEI EpiDoc
Available resources and developing process
The Linear A Corpus as Cultural Heritage
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-29#paper-1038#slide-1 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-1 | Linear A and Minoan | The Linear A script was used by the Minoan Civilization (Crete, 2500
1450 BC) and it still remains undeciphered
Many symbols are shared by both Linear A and Linear B and are assumed to have phonetic values. The others are probably logograms:
Linear A/B Linear A symbols value syllable logogram
Linear B has been deciphered (during the 50s) and found to be used to write an Ancient Greek dialect, so many scholars are trying to decipher Linear A too
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | The Linear A script was used by the Minoan Civilization (Crete, 2500
1450 BC) and it still remains undeciphered
Many symbols are shared by both Linear A and Linear B and are assumed to have phonetic values. The others are probably logograms:
Linear A/B Linear A symbols value syllable logogram
Linear B has been deciphered (during the 50s) and found to be used to write an Ancient Greek dialect, so many scholars are trying to decipher Linear A too
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-29#paper-1038#slide-2 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-2 | Lack in digital resources | After decades no deciphering attempts have been successful
No heavy computational approaches have been attempted
Only John G. Younger, in his website, provides a complete digital collection
Nevertheless, it is stored in two simple HTML pages with no strict structure and transcribed as transliterations
A new digital corpus in a suitable format and well organized may be a useful resource
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | After decades no deciphering attempts have been successful
No heavy computational approaches have been attempted
Only John G. Younger, in his website, provides a complete digital collection
Nevertheless, it is stored in two simple HTML pages with no strict structure and transcribed as transliterations
A new digital corpus in a suitable format and well organized may be a useful resource
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-29#paper-1038#slide-3 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-3 | Available resources | (about 2 A4 pages of text at 11pt)
GORILA paper collection of inscriptions and transcriptions
John G. Youngers website
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | (about 2 A4 pages of text at 11pt)
GORILA paper collection of inscriptions and transcriptions
John G. Youngers website
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-29#paper-1038#slide-4 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-4 | Gorila | a catalog of symbols/numeric codes documents indexes with information about original place and type of support (these indexes were defined in the first place by Pope&Raison) indexed documents descriptions including pictures, drawings and handmade transcriptions
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | a catalog of symbols/numeric codes documents indexes with information about original place and type of support (these indexes were defined in the first place by Pope&Raison) indexed documents descriptions including pictures, drawings and handmade transcriptions
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-29#paper-1038#slide-5 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-5 | John G Youngers website | two HTML pages, one for Haghia Triadas documents, one for all the other places of origin
1,077 transcriptions, with Linear B phonetics and GORILA code numbers (75.5% of the total amount of existing documents listed in
GORILA) a conversion table: GORILA code numbers to syllables
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | two HTML pages, one for Haghia Triadas documents, one for all the other places of origin
1,077 transcriptions, with Linear B phonetics and GORILA code numbers (75.5% of the total amount of existing documents listed in
GORILA) a conversion table: GORILA code numbers to syllables
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-29#paper-1038#slide-6 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-6 | From Youngers syllables to Unicode | The Unicode set of characters for Linear A was released in June 2014
The 1,077 documents represented on Youngers website have been automatically converted
from the syllable transcription (coexisting alongside GORILA code numbers for symbols not included in Linear B) to the full GORILA code numbers transcription from GORILA code numbers to Unicode
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | The Unicode set of characters for Linear A was released in June 2014
The 1,077 documents represented on Youngers website have been automatically converted
from the syllable transcription (coexisting alongside GORILA code numbers for symbols not included in Linear B) to the full GORILA code numbers transcription from GORILA code numbers to Unicode
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
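The row above (and the paper text it embeds) describes a two-step, dictionary-based conversion: Younger's syllabic transcription to GORILA codes, then GORILA codes to Unicode code points, written out as UTF-8. The following is a minimal illustrative sketch of that pipeline, not the authors' original script; the dictionary entries, the helper name and the input format are assumptions added for the example, and only the first signs of the Unicode 7.0 Linear A block (which starts at U+10600, LINEAR A SIGN AB001) are shown.

```python
# Minimal sketch of the two-step conversion described above; NOT the
# authors' original script. The dictionary fragments below are toy,
# assumed entries; the real tables cover the full GORILA inventory and
# were checked against the GORILA volumes.

# Step 1: Younger's syllabic values -> GORILA codes (illustrative entries).
SYLLABLE_TO_GORILA = {
    "DA": "AB01",
    "PA": "AB03",
}

# Step 2: GORILA codes -> Unicode code points. The Linear A block starts
# at U+10600 (LINEAR A SIGN AB001) and follows the GORILA ordering for
# the simple AB signs.
GORILA_TO_UNICODE = {
    "AB01": 0x10600,
    "AB03": 0x10602,
}

def syllables_to_unicode(transcription: str) -> str:
    """Convert a dash-separated syllabic transcription (e.g. 'DA-PA')
    into a string of Linear A Unicode characters."""
    signs = []
    for syllable in transcription.split("-"):
        gorila = SYLLABLE_TO_GORILA[syllable]          # step 1
        signs.append(chr(GORILA_TO_UNICODE[gorila]))   # step 2
    return "".join(signs)

if __name__ == "__main__":
    # Write the result to a UTF-8 file, as in the described pipeline.
    with open("example.txt", "w", encoding="utf-8") as out:
        out.write(syllables_to_unicode("DA-PA"))
```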
GEM-SciDuet-train-29#paper-1038#slide-7 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-7 | Segmentation issues | Separation is mainly indicated in two ways:
by isolating sign groups with numbers or logograms, thereby implying a separation dots between sign groups, always used if there are long sign groups strings
Example: This is a Linear A line:
is a number (it is assumed to be a number 5) so and are assumed to be separated sign groups
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | Separation is mainly indicated in two ways:
by isolating sign groups with numbers or logograms, thereby implying a separation dots between sign groups, always used if there are long sign groups strings
Example: This is a Linear A line:
is a number (it is assumed to be a number 5) so and are assumed to be separated sign groups
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
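The paper text embedded in the rows above describes how sign groups conventionally read as numbers, which have no Unicode code points, are encoded in the TEI/EpiDoc markup through <glyph> declarations and <g ref="#n5"/> references, and how part of the XML annotation was added automatically with a Python script. The snippet below is a hedged sketch of such an annotation step using the standard-library xml.etree API; the function name and the token format it accepts are assumptions made for this example, not details taken from the paper.

```python
# Illustrative sketch of the automatic EpiDoc markup step; NOT the
# authors' original annotation script. The element names (<ab>, <lb>,
# <w>, <g ref="#nN"/>) follow the Figure 1/Figure 2 examples quoted in
# the paper text above; the token format is an assumption.
import xml.etree.ElementTree as ET

def annotate_line(line_number: int, tokens: list) -> ET.Element:
    """Wrap one inscription line in EpiDoc elements.

    `tokens` holds either Unicode sign-group strings or ("number", n)
    pairs for sign groups read as numbers, which have no Unicode
    representation and are therefore emitted as <g ref="#nN"/>.
    """
    ab = ET.Element("ab", {"part": "N"})
    ET.SubElement(ab, "lb", {"n": str(line_number)})
    for token in tokens:
        if isinstance(token, tuple) and token[0] == "number":
            ET.SubElement(ab, "g", {"ref": f"#n{token[1]}"})
        else:
            w = ET.SubElement(ab, "w", {"part": "N"})
            w.text = token
    return ab

if __name__ == "__main__":
    # A line containing one sign group followed by the number 5.
    line = annotate_line(2, ["\U00010600\U00010602", ("number", 5)])
    print(ET.tostring(line, encoding="unicode"))
```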
GEM-SciDuet-train-29#paper-1038#slide-8 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-8 | Corpus data format | XML provides important advantages metadata on several levels of annotation elements and entities for unsupported glyphs or symbols
EpiDoc is a TEI DTD with customization for Epigraphy
TEI-using community can provide support a wide range of best-practice examples are available online
The old Leiden system annotation task, familiar to epigraphers, is quite similar to the XML TEI EpiDoc annotation process
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | XML provides important advantages metadata on several levels of annotation elements and entities for unsupported glyphs or symbols
EpiDoc is a TEI DTD with customization for Epigraphy
TEI-using community can provide support a wide range of best-practice examples are available online
The old Leiden system annotation task, familiar to epigraphers, is quite similar to the XML TEI EpiDoc annotation process
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-29#paper-1038#slide-9 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-9 | Corpus data format example | Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-29#paper-1038#slide-10 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-10 | Unsupported glyphs handling | Inside the EncodingDesc>CharDecl elements, glyph elements can be defined
g elements referring to glyphs can be used to represent unsupported
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | Inside the EncodingDesc>CharDecl elements, glyph elements can be defined
g elements referring to glyphs can be used to represent unsupported
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-29#paper-1038#slide-11 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-11 | Corpus size | GORILA: 1,427 Linear A documents
John G. Younger's website: 1,077 Linear A transcriptions (75.5% of the total)
Our corpus will contain up to 1,077 Linear A XML TEI EpiDoc documents
The Unicode versions of John G. Younger's transcriptions have been converted into XML automatically, but the tagging has only been partially carried out
The main remaining work (still in progress) is manually checking the data against the GORILA volumes
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | GORILA: 1,427 Linear A documents
John G. Younger's website: 1,077 Linear A transcriptions (75.5% of the total)
Our corpus will contain up to 1,077 Linear A XML TEI EpiDoc documents
The Unicode versions of John G. Younger's transcriptions have been converted into XML automatically, but the tagging has only been partially carried out
The main remaining work (still in progress) is manually checking the data against the GORILA volumes
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-29#paper-1038#slide-12 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-12 | John Younger ttf | Before the release of Unicode 7.0, there was no way to visualize
The traditional Linear A font, LA.ttf, included wrong Unicode positions
We developed a new Linear A font, named after John Younger to show our appreciation for his work: John_Younger.ttf (available at
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | Before the release of Unicode 7.0, there was no way to visualize
The traditional Linear A font, LA.ttf, included wrong Unicode positions
We developed a new Linear A font, named after John Younger to show our appreciation for his work: John_Younger.ttf (available at
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-29#paper-1038#slide-13 | 1038 | Minoan linguistic resources: The Linear A Digital Corpus | This paper describes the Linear A/Minoan digital corpus and the approaches we applied to develop it. We aim to set up a suitable study resource for Linear A and Minoan. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168
],
"paper_content_text": [
"Firstly we start by introducing Linear A and Minoan in order to make it clear why we should develop a digital marked up corpus of the existing Linear A transcriptions.",
"Secondly we list and describe some of the existing resources about Linear A: Linear A documents (seals, statuettes, vessels etc.",
"), the traditional encoding systems (standard code numbers referring to distinct symbols), a Linear A font, and the newest (released on June 16th 2014) Unicode Standard Characters set for Linear A. Thirdly we explain our choice concerning the data format: why we decided to digitize the Linear A resources; why we decided to convert all the transcriptions in standard Unicode characters; why we decided to use an XML format; why we decided to implement the TEI-EpiDoc DTD.",
"Lastly we describe: the developing process (from the data collection to the issues we faced and the solving strategies); a new font we developed (synchronized with the Unicode Characters Set) in order to make the data readable even on systems that are not updated.",
"Finally, we discuss the corpus we developed in a Cultural Heritage preservation perspective and suggest some future works.",
"Introduction to Linear A and Minoan Linear A is the script used by the Minoan Civilization (Cotterell, 1980) from 2500 to 1450 BC.",
"Writing system Time span Cretan Hieroglyphic 2100 -1700 BC Linear A 2500 -1450 BC Linear B 1450 -1200 BC The Minoan Civilization arose on the island of Crete in the Aegean Sea during the Bronze Age.",
"Minoan ruins and artifacts have been found mainly in Crete but also in other Greek islands and in mainland Greece, in Bulgaria, in Turkey and in Israel.",
"Linear A is not used anymore and, even after decades of studies (it was discovered by Sir Arthur Evans around 1900 (Evans, 1909) ), it still remains undeciphered.",
"All the assumptions and hypotheses made about Linear A and Minoan (its underlying language) are mainly based on the comparison with the well known Linear B, the famous child system originated by Linear A.",
"In fact, Linear B was fully deciphered during the 1950s by Michael Ventris 1 and was found to encode an ancient Greek dialect used by the Mycenaean civilization.",
"Archaeologist Arthur Evans named the script 'Linear' because it consisted just of lines inscribed in clay (Robinson, 2009) There have been several attempts to decipher Linear A and the Minoan Language.",
"We can divide the underlying hypotheses in six groups: Greek-like language (Nagy, 1963) , distinct Indo-European branch (Owens, 1999) , Anatolian language close to Luwian (Palmer, 1958) , archaic form of Phoenician (Dietrich and Loretz, 2001) , Indo-Iranian (Faure, 1998) and Etruscan-like language (Giulio M. Facchetti and Negri, 2003) .",
"There is also an interesting attempt (Younger, 2000b) to decipher single words, specifically toponyms, by applying Linear B phonetic values to the symbols shared by both Linear A and Linear B and following the assumption that toponyms are much more likely to survive as loans in Mycenaean Greek (written in Linear B); we show an example of this approach in Table 2.",
"In the next sections we describe the available existing resources concerning Linear A and the Linear A Digital Corpus: why and how we developed it.",
"Linear A available resources Even if Linear A and Linear B were discovered more than one century ago, Linear A has not been deciphered yet.",
"Nevertheless, many scholars worked on collecting and organizing all the available data in order to study and to decipher the script and the language.",
"Probably due to the fact that only historical linguists, philologists and archaeologists attempted to collect and organize all the existing data, nowadays a rich and well organized digital corpus is still not available.",
"In this section we describe all the available Linear A resources, including both physical documents and digital data.",
"Table 3 : Indexed types of support (Younger, 2000e) .",
"Linear A documents Linear A was written on a variety of media, such as stone offering tables, gold and silver hair pins, and pots (inked and inscribed).",
"The clay documents consist of tablets, roundels, and sealings (one-hole, two-hole, and flat-based).",
"Roundels are related to a \"conveyance of a commodity, either within the central administration or between the central administration and an external party\" (Palmer, 1995; Schoep, 2002) .",
"The roundel is the record of this transaction that stays within the central administration as the commodity moves out of the transacting bureau (Hallager, 1996) .",
"Two-hole sealings probably dangled from commodities brought into the center; onehole sealings apparently dangled from papyrus/parchment documents; flat-based sealings (themselves never inscribed) were pressed against the twine that secured papyrus/parchment documents (Younger, 2000g; Schoep, 2002) as shown by photographs (Müller, 1999) , (Müller, 2002) of the imprints that survive on the underside of flat-based sealings.",
"There are 1,427 Linear A documents containing 7,362-7,396 signs, much less than the quantity of data we have for Linear B (more than 4,600 documents containing 57,398 signs) (Younger, 2000f) .",
"Godart and Olivier's Collection of Linear A Inscriptions There is a complete and organized collection of Linear A documents on a paper corpus, the GORILA Louis Godart and Jean-Pierre Olivier, Recueil des inscriptions en Linéaire A (Godart and Olivier, 1976) .",
"Godart and Olivier have indexed the documents by original location and type of support, following the Raison-Pope Index (Raison and Pope, 1971) .",
"For example, the document AP Za 1 is from AP = Apodoulou and the support type is Za = stone vessels as shown in Table 3 .",
"Younger (2000h) provides a map with all the Cretan sites and one with all the Greek non-Cretan sites (Younger, 2000i) .",
"Godart and Olivier also provide referential data about conservation places (mainly museums), and periodization (for example: EM II = Second Early Minoan).",
"Since 1976, this has been the main source of data and point of reference about Linear A documents and it has set up the basis for further studies.",
"Even recent corpora, such as the Corpus transnuméré du linéaire A (Raison and Pope, 1994) , always refer to GORILA precise volumes and pages describing each document.",
"John G. Younger's website Younger (2000j) has published a website that is the best digital resource available (there is another interesting project, never completed, on Yannis Deliyannis's website 2 ).",
"It collects most of the existing inscriptions (taking GORILA as main source of data and point of reference) transcribed as Linear B phonetic values (like the KU-NI-SU transcription above).",
"The transcriptions are kept up to date and a complete restructuring in June 2015 has been announced (Younger, 2000j) .",
"GORILA symbols catalogue Many transcription systems have been defined.",
"The first one has been proposed by Raison and Pope (1971) and uses a string composed by one or two characters (Lm, L or Lc depending on the symbol, respectively metric, phonetic or compound) followed by a number, for example: L2.",
"This system has been widely used by many scholars such as David Woodley Packard (president of the Packard Humanities Institute 3 ), Colin Renfrew and Richard Janko (Packard, 1974; Renfrew, 1977; Janko, 1982) .",
"The second one, used in the GORILA collection (Godart and Olivier, 1976 ) and on John G. Younger's website, consists of a string composed by one or two characters (AB if the symbol is shared by Linear A and Linear B, A if the symbol is only used in Linear A) followed by a number and eventually other alphabetical characters (due to addenda and corrigenda to earlier versions), for example: AB03.",
"Many scholars transcribe the symbols shared by Linear A and B with the assumed phonetical/syllabic transcription.",
"This syllabic transcription is based on the corresponding Linear B phonetic values.",
"Younger (2000a) provides a conversion table of Pope and Raison's transcription system, GO-RILA's transcription system and his own phonetic/syllabic transcription system.",
"Developing our corpus, we worked mainly on Younger's syllabic and GORILA transcriptions, because the Unicode Linear A encoding is broadly based on the GORILA catalogue, which is also the basic set of characters used in decipherment efforts 4 .",
"We provide an example of different transcriptions for the same symbol in Table 4 .",
"As can be noticed, the Unicode encoding is based on the GORILA transcription system.",
"Linear A Font The best Linear A Font available is LA.ttf, released by D.W. Borgdorff 5 in 2004.",
"In this font some arbitrary Unicode positions for Latin characters are mapped to Linear A symbols.",
"On one hand this allows the user to type Linear A symbols directly by pressing the keys on the keyboard; on the other hand, only transliterations can be produced.",
"The text eventually typed internally will be a series of Latin characters.",
"It should be remarked that this font would not be useful to make readable a Linear A corpus that is non-translittered and encoded in Unicode.",
"Unicode Linear A Characters Set On June 16th 2014, Version 7.0 of Unicode standard was released 6 , adding 2,834 new characters and including, finally, the Linear A character set.",
"Linear A block has been set in the range 10600-1077F and the order mainly follows GORILA's one 7 , as seen in Table 4 .",
"This Unicode Set covers simple signs, vase shapes, complex signs, complex signs with vase shapes, fractions and compound fractions.",
"This is a resource that opens, for the first time, the possibility to develop a Linear A digital corpus not consisting of a transliteration or alternative transcription.",
"Corpus data format Many scholars have faced the issues for data curation and considered various possibilities.",
"Among all the possible solutions, we chose to develop the Linear A Digital Corpus as a collection of TEI-EpiDoc XML documents.",
"In this section we explain why.",
"Why Digital?",
"Many epigraphic corpora have begun to be digitalized; there are many reasons to do so.",
"A digital corpus can include several representations of the inscriptions (Mahoney, 2007) : • pictures of the original document; • pictures of drawings or transcriptions made by hand simplifying the document; • diplomatic transcriptions; • edited texts; • translations; • commentaries.",
"Building a database is enough to get much richer features than the ones a paper corpus would provide.",
"The most visible feature of an epigraphic database is its utility as an Index Universalis (Gómez Pantoja and Álvarez, 2011); unlike hand-made indexes, there is no need to constrain the number of available search-keys.",
"Needless to say, the opportunity to have the data available also on the web is valuable.",
"Why Unicode?",
"Text processing must also take into account the writing systems represented in the corpus.",
"If the corpus consists of inscriptions written in the Latin alphabet, then the writing system of the inscriptions is the same as that of the Western European modern languages used for meta-data, translations, and commentaries.",
"In our case, unluckily, we have to deal with Linear A, so we need to find a way to represent our text.",
"Scholars objected to epigraphic databases on the ground of its poor graphic ability to represent non-Latin writing systems (García Barriocanal et al., 2011) .",
"This led to the use of non-standard fonts in some databases which probed to be a bad move, compromising overall compatibility and system upgrading.",
"This approach is appealing because if the corpus needs to be printed, sooner or later fonts will be a need in all cases.",
"The font-based solution assumes that all the software involved can recognize font-change markers.",
"Unluckily, some Database Management Systems (DMSs) do not allow changes of font within a text field and some export or interchange formats lose font information.",
"When the scripts of the corpus are all supported, which will be the case for any script still used by a living language, Unicode is a better approach.",
"Despite Minoan not being a living language, Linear A is finally part of the Unicode 7.0 Character Code Charts 8 but some sign groups conventionally interpreted as numbers have no Unicode representation.",
"Why XML?",
"Until not so long ago, markup systems have always involved special typographical symbols in the text-brackets, underdots, and so on.",
"Some epigraphers see XML as a natural transformation of what they have always done, with all the additional benefits that come from standardization within the community.",
"There is a growing consensus that XML is the best way to encode text.",
"Some corpora may also use the typographical marks of the Leiden system, which has the advan-<glyph xml:id=\"n5\"> <glyphName> Number 5 </glyphName> <mapping type=\"standardized\"> 5 </mapping> </glyph> tage of being entirely familiar to the epigraphers who create and maintain the corpus.",
"Unfortunately, the special brackets, underdots, and other typographical devices may not be supported by the character set of the computer system to be used.",
"A key incentive for using XML is the ability to exchange data with other projects.",
"It is convenient to be able to divide the information in many layers: cataloging, annotating, commenting and editing the inscriptions.",
"In some cases, merging different layers from different projects could be a need (for example when each of these projects is focused on a specific layer, for which provides the best quality), as a consequence the resulting data should be in compatible forms.",
"If the projects use the same Document Type Definition (DTD), in the same way, this is relatively easy.",
"While corpora that store their texts as wordprocessor files with Leiden markup can also share data, they must agree explicitly on the details of text layout, file formats, and character encodings.",
"With XML, it is possible to define either elements or entities for unsupported characters.",
"This feature is particularly interesting in our case, giving a solution for the numbers representation (Linear A numbers, except for fractions, have no Unicode representation).",
"Suppose you want to mark up the sign group , conventionally interpreted as the number 5, in the XML.",
"As specified in the TEI DTD, this could be expressed as <g ref=\"#n5\"/>, where the element g indicates a glyph, or a non-standard character and the attribute value points to the element glyph, which contains information about the specific glyph.",
"An example is given in Figure 1 .",
"Alternatively, the project might define an entity to represent this character.",
"Either way, the XML text notes that there is the Linear A number 5, and the later rendering of the text for display or printing can substitute the appropriate character in a known font, a picture of the character, or even a numeral from a different system.",
"Such approaches assume that tools are available for these conversions; some application, transformation, or stylesheet must have a way to know how to interpret the given element or entity.",
"The usage of XML provides two advantages: in first place, it makes possible the encoding of the characters that occur in the text (as shown above); in second place, it's really useful for encoding meta-information.",
"Why EpiDoc?",
"If a project decides to use XML, the most appropriate DTD (or schema) to be used needs to be chosen.",
"As in every other humanities discipline, the basic question is whether to use a general DTD, like the TEI, or to write a project-specific one.",
"Some projects need DTDs that are extremely specific to the types of inscriptions they are dealing with, instead other projects prefer to rely on existing, widely used DTDs.",
"Mahoney (2007) has deeply analyzed all the digitization issues, taking into account all the advantages and disadvantages of different approaches; her conclusion is that it's best to use EpiDoc 9 an XML encoding tool that could be also used to write structured documents compliant with the TEI standard 10 .",
"The EpiDoc DTD is the TEI, with a few epigraphically oriented customizations made using the standard TEI mechanisms.",
"Rather than writing a DTD for epigraphy from scratch, the Epi-Doc group uses the TEI because TEI has already addressed many of the taxonomic and semantic challenges faced by epigraphers, because the TEIusing community can provide a wide range of best-practice examples and guiding expertise, and because existing tooling built around TEI could easily lead to early and effective presentation and use of TEI-encoded epigraphic texts (Mahoney, 2007) .",
"The TEI and EpiDoc approaches have already been adopted by several epigraphic projects (Bodard, 2009 ), such as the Dêmos project (Furman University) and the corpus of Macedonian and Thracian inscriptions being compiled at KERA, the Research Center for Greek and Roman Antiquity at Athens (Mahoney, 2007) .",
"Also other scholars evaluate EpiDoc as a suitable choice.",
"Felle (2011) compares the EAGLE (Electronic Archive of Greek and Latin Epigraphy 11 ) project with the EpiDoc existing resources, viewing these resources as different but complementary.",
"Álvarez et al.",
"(2010) and Gómez Pantoja and Álvarez (2011) discuss the possibility of sharing Epigraphic Information as EpiDoc-based Linked Data and describe how they implemented a relational-to-linked data solution for the Hispania Epigraphica database.",
"Cayless (2003) evaluates EpiDoc as a relevant digital tool for Epigraphy allowing for a uniform representation of epigraphic metadata.",
"The EpiDoc guidelines are emerging as one standard for digital epigraphy with the TEI.",
"EpiDoc is not the only possible way to use the TEI for epigraphic texts but the tools, documentation, and examples 12 make it a good environment for new digitization projects as ours.",
"EpiDoc structure An EpiDoc document is structured as a standard TEI document with the teiHeader element including some initial Desc sections (fileDesc, encodingDesc, profileDesc, revisionDesc, etc) containing metadata, general information and descriptions (here we annotated place, period, kind of support and specific objects/fragments IDs).",
"An interesting use of encodingDesc is shown in Figure 1 above: the gliph element has to be defined inside its parent element charDecl and its grandparent element encodingDesc.",
"The teiHeader element is followed by the text element including the body element composed by a series of unnumbered <div>s, distinguished by their type attributes (we show an example of the Epidoc <div> element in Figure 2 ).",
"Typical divisions might include: • text itself (type=\"edition\"); • translation (type=\"translation\"); 11 http://www.eagle-eagle.it/ 12 http://wiki.tei-c.org/index.php/ Samples_of_TEI_texts • description (type=\"description\"; • commentary (type=\"commentary\"); • historical information(type=\"history\"); • bibliography (type=\"bibliography\").",
"<div lang=\"minoan\" n=\"text\" type=\"edition\" part=\"N\" sample=\"complete\" org=\"uniform\"> <head lang=\"eng\">Edition</head> <cb rend=\"front\" n=\"HM 1673\"/> <ab part=\"N\"> <lb n=\"1\"/> <w part=\"N\"> </w> <space dim=\"horizontal\" extent=\"1em\" unit=\"character\"/> <w part=\"N\"> </w> <lb n=\"2\"/> <w part=\"N\"> </w> <g ref=\"#n5\"/> <w part=\"N\"> </w> <lb n=\"3\"/> <w part=\"N\"> </w> <g ref=\"#n12\"/> <w part=\"N\"> </w> <lb n=\"4\"/> <w part=\"N\"> </w> <g ref=\"#n6\"/> <lb n=\"5\"/> <w part=\"N\"> </w> <lb n=\"6\"/> <g ref=\"#n4\"/> <w part=\"N\"> </w> <supplied reason=\"damage\"> </supplied> <gap extent=\"2em\" reason=\"lost\" unit=\"character\" dim=\"right\"/> </ab> </div> The EpiDoc DTD introduces a finite set of possible values for the type of a <div>, so that there is a clear distinction between sections covering different aspects, such as the commentary, the description or the archaeological history.",
"One advantage of structured markup is that editors can encode more information about how certain a particular feature is.",
"The date of an inscription, for example, can be encoded as a range of possible dates.",
"EpiDoc includes the TEI <certainty> element and the cert attribute to encourage editors to say whether or not they are completely confident of a given reading.",
"After some discussion, the EpiDoc community (Mahoney, 2007) decided that certainty should be expressed as a yes-or-no value: either the editor is certain of the reading, or not.",
"Gradual certainty is too complicated to manage and is best explained in the commentary.",
"Developing the Linear A Corpus The hope that computational approaches could help decipher Linear A, along with the evident lack of rich digital resources in this field, led us to develop this new resource.",
"In this section we describe which issues we faced and which solving strategies we used.",
"Data Collection Luckily the existence of Younger's website and GORILA volumes, together with the Raison-Pope Index, made possible a semi-automatic collection process, starting from syllabic transcriptions taken from Younger's website (with his permission), converting them in Unicode strings through Python scripts and acquiring all the metadata provided in Younger's transcriptions (location and support IDs, conservation place, periodization etc.).",
"Younger's resources on his website consist of two HTML pages, one containing inscriptions from Haghia Triada (that is the richest location in terms of documents found there) (Younger, 2000k) and the other containing documents from all the other locations (Younger, 2000l ).",
"Younger's transcriptions are well enriched with metadata.",
"The metadata convey the same information found in GORILA, including the Raison-Pope Index, plus some additional description of the support (this was not necessary in GORILA volumes, where the transcriptions are shown just next to the documents pictures) and the reference to the specific GORILA volume and pages.",
"Segmentation Issues When working on ancient writing systems, segmentation issues are expected to come up.",
"John G. Younger explains (Younger, 2000c ) that in Linear A separation is mainly indicated in two ways: first, by associating sign groups with numbers or logograms, thereby implying a separation; second, by placing a dot between two sign groups, thereby explicitly separating the sign groups or between a sign group and some other sign like a transaction sign or a logogram.",
"Younger also explains that in texts that employ a string of sign groups, dots are used to separate them and this practice is most notable on non-bureaucratic texts and especially in religious texts.",
"On his website, Younger also covers the hyphenization issue (Younger, 2000d) , explaining that in some cases we find a split across lines and the reason may involve separating prefixes from base words (the root of a sign group) or base words from their suffixes.",
"As Younger points out, this hypothesis would require evidence showing that affixes are involved.",
"The hyphenization issue is more complex to solve because a 'neutral' resource should avoid transcriptions implying a well known segmentation for Linear A sign groups.",
"In Younger's transcriptions, split sign groups are reunified in order to make it clearer when a known sign group is there.",
"Instead, our digital collection keeps the text as it is on the document, all the information about interpretations of such kind can be stored separately.",
"Obtaining Unicode transcriptions We managed to obtain Unicode encoded transcriptions by automatically converting Younger's phonetic transcriptions to GORILA transcriptions (manually checked against GORILA volumes) and then by automatically converting GORILA transcriptions to Unicode codes and printing them as Unicode characters (UTF-8 encoding).",
"In order to create the syllables-to-GORILA and the GORILA-to-Unicode dictionaries, we took into account Younger's conversion table mentioned in Subsection 2.4 and the official Unicode documentation (containing explicit Unicode-to-GORILA mapping information).",
"All these processing steps have been implemented through Python scripts.",
"XML annotation Once collected the whole corpus encoded in Unicode, we automatically added part of the XML annotation through a python script.",
"These documents have been later manually corrected and completed, checking against GORILA volumes.",
"A new Linear A font Before the Unicode 7.0 release, there was no way to visualize Unicode characters in the range 10600-1077F.",
"Even now, systems that are not updated may have trouble to visualize those characters.",
"Some implementations for Unicode support in certain contexts (for example for L A T E X's output) are not always up-to-date, so it is not obvious that the fonts for the most recent characters sets are available.",
"We decided to develop a new Linear A font, solving the main issue found in LA.ttf (wrong Unicode positions).",
"Starting from the official Unicode documentation, we created a set of symbols graphically similar to the official ones and aligned them to the right Unicode positions.",
"We decided to name the font John_Younger.ttf to show our appreciation for Younger's work.",
"He made the results of GORILA available to a wider public on digital media; this is the same goal we want to pursue by developing and distributing this font.",
"We released the font file at the following URL: http://openfontlibrary.",
"org/en/font/john-younger.",
"The Linear A Digital Corpus as cultural resource As stated by European Commission (2015) and UNESCO (2003) , the meaning of the notion of cultural heritage does not apply just to material objects and works of art, but also to 'intangible cultural heritage', as traditions and creative expressions.",
"In this perspective, linguistic corpora fit perfectly this definition; in fact, they contain information about tradition, knowledge and lifestyle of a certain culture.",
"Despite the fact that the Minoan language has not yet been deciphered, we know that the Linear A corpus provides interesting information concerning economy, commerce and religion.",
"As mentioned in Subsection 2.1, Schoep (2002) made a critical assessment of the Linear A tablets and their role in the administrative process, studying the physical supports.",
"Ruth Palmer (1995) made a deep study of commodities distributions (listing precise quantities and places) among Minoan centers, even without a full understanding of documents contents.",
"As Palmer points out, 'the ideograms for basic commodities, and the formats of the Linear A texts are similar enough to their Linear B counterparts to allow valid comparison of the types and amounts of commodities which appear in specific contexts'.",
"So, it's possible to have 'an idea of the focus of the economy' and of 'the scale and complexity of the transactions'.",
"From the linear A tablets, we can infer information about the resources management and administration system of Minoan centers.",
"Van den Kerkhof and Rem (2007) analyzed the Minoan libation formulas: religious inscriptions on cups, ladles and tables that were used in the offerings of oil and other powerful drinks at dawn.",
"The priestesses that carried out the Minoan libation ritual used all kinds of utensils, and they often inscribed their sacred formulas onto these objects.",
"Around thirty of these texts have survived (whole or in part) on libation tables, ladles and vases, written in various kinds of handwriting.",
"Transcripts of these religious inscriptions are available from Consani et al.",
"(1999) and from John G. Younger (2000m) on his website.",
"As noticed by Duhoux (1989) the Minoan libation formulas have a fixed structure with variable elements.",
"In fact, some studies (Davis, 2014) about Minoan syntax have been made by observing the sign groups order found in these regular formulas.",
"More importantly, the presence of olive-like ideograms could tell us that the Minoans used olive oil for libation (Van den Kerkhof and Rem, 2007) .",
"Beyond all these parts of the Minoan cultural heritage already available, a huge part is preserved there too: the Minoan language, with its hidden stories reflecting the life of a civilization.",
"We hope that our contribution can be useful to the community and that the Minoan, in its digital form, may finally be deciphered through computational approaches.",
"Future Work We are working on XSL style sheets in order to create suitable HTML pages.",
"All the data will be freely available and published at the following URL: http://ling.ied.edu.HK/ gregoire/lineara.",
"A further step will be developing a web interface to annotate, and dynamically enrich the corpus information."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.4",
"2.5",
"2.6",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"3.5",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"4.5",
"5",
"6"
],
"paper_header_content": [
"Introduction to Linear A and Minoan",
"Linear A available resources",
"Linear A documents",
"Godart and Olivier's Collection of Linear A Inscriptions",
"John G. Younger's website",
"GORILA symbols catalogue",
"Linear A Font",
"Unicode Linear A Characters Set",
"Corpus data format",
"Why Digital?",
"Why Unicode?",
"Why XML?",
"Why EpiDoc?",
"EpiDoc structure",
"Developing the Linear A Corpus",
"Data Collection",
"Segmentation Issues",
"Obtaining Unicode transcriptions",
"XML annotation",
"A new Linear A font",
"The Linear A Digital Corpus as cultural resource",
"Future Work"
]
} | GEM-SciDuet-train-29#paper-1038#slide-13 | From Linear A to Minoan culture | The Linear A corpus is an important cultural monument, storing information about tradition, knowledge and lifestyle of Minoan people
Even without a full understanding of transcriptions some cultural features can be inferred
Economics and commerce: as some ideograms for basic commodities are similar to their Linear B counterparts, we can compare types and amounts of commodities
Religion: there are around thirty libation formulas transcribed on various supports
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | The Linear A corpus is an important cultural monument, storing information about tradition, knowledge and lifestyle of Minoan people
Even without a full understanding of transcriptions some cultural features can be inferred
Economics and commerce: as some ideograms for basic commodities are similar to their Linear B counterparts, we can compare types and amounts of commodities
Religion: there are around thirty libation formulas transcribed on various supports
Petrolito, Winterstein, Perono Cacciafoco Linear A Corpus 30 July 2015 | [] |
GEM-SciDuet-train-30#paper-1041#slide-0 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-0 | Motivation | Language exhibits hierarchical structure
[[The cat [that he adopted]] [sleeps]]
but LSTMs work so well without explicit notions of structure.
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Language exhibits hierarchical structure
[[The cat [that he adopted]] [sleeps]]
but LSTMs work so well without explicit notions of structure.
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
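A note on the evaluation described in the paper_content_text of the surrounding records: the number-agreement diagnostic compares the model's probability of the correct verb form with that of the incorrect form, given the shared sentence prefix. The sketch below is a minimal illustration of that scoring loop under stated assumptions; the next_logprobs interface, function names, and data layout are hypothetical and are not taken from the paper's DyNet implementation or from this dataset.

# Hedged Python sketch of the number-agreement scoring described above.
# The language-model interface here is assumed, not the paper's actual API.
def agrees(lm, prefix_tokens, correct_verb, incorrect_verb):
    # lm.next_logprobs(prefix) is assumed to return a dict mapping each
    # vocabulary item to its log-probability as the next token.
    logprobs = lm.next_logprobs(prefix_tokens)
    return logprobs[correct_verb] > logprobs[incorrect_verb]

def agreement_error_rate(lm, test_cases):
    # test_cases: iterable of (prefix_tokens, correct_verb, incorrect_verb).
    cases = list(test_cases)
    wrong = sum(0 if agrees(lm, p, c, i) else 1 for p, c, i in cases)
    return wrong / max(1, len(cases))

Because only the verb position differs between the two candidates, comparing the next-token probabilities at that position is equivalent to comparing the scores of the two sentence prefixes up to and including the verb, which keeps the diagnostic cheap to compute.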
GEM-SciDuet-train-30#paper-1041#slide-1 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-1 | Number Agreement | Number agreement example with
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Number agreement example with
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-2 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-2 | Number Agreement is Sensitive to Syntactic Structure | Number agreement reflects the dependency relation between subjects and verbs
Models that can capture headedness should do better at number agreement
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Number agreement reflects the dependency relation between subjects and verbs
Models that can capture headedness should do better at number agreement
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-3 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-3 | Number Agreement Dataset Overview | Number agreement dataset is derived from dependency-parsed
All intervening nouns must be of the same number n=2
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018)
The vast majority of number agreement dependencies are sequential | Number agreement dataset is derived from dependency-parsed
All intervening nouns must be of the same number n=2
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018)
The vast majority of number agreement dependencies are sequential | [] |
GEM-SciDuet-train-30#paper-1041#slide-4 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-4 | First Part Can LSTMs Learn Number Agreement Well | The model is trained with language modelling objectives
Revisit the same question as Linzen et al. (2016):
To what extent are LSTMs able to learn non-local syntax-sensitive dependencies in natural language?
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | The model is trained with language modelling objectives
Revisit the same question as Linzen et al. (2016):
To what extent are LSTMs able to learn non-local syntax-sensitive dependencies in natural language?
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-5 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-5 | Linzen et al LSTM Number Agreement Error Rates | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-6 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-6 | Small LSTM Number Agreement Error Rates | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-7 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-7 | Larger LSTM Number Agreement Error Rates | Capacity matters for capturing non-local structural dependencies
Despite this, relatively minor perplexity
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Capacity matters for capturing non-local structural dependencies
Despite this, relatively minor perplexity
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
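The row above describes deriving top-down structure-building action sequences from predicted phrase-structure trees before training an RNNG. The sketch below shows one plausible way to turn a bracketed tree into NT(X), GEN(word), and REDUCE actions; it is an illustrative assumption based on that description, not the authors' released implementation, and the example tree simply reuses the paper's "The fox eats worms" running example.

```python
# Illustrative sketch: derive a top-down RNNG-style action sequence
# (NT(X), GEN(word), REDUCE) from a bracketed phrase-structure tree.
# This is an assumed reconstruction of the preprocessing step described in the
# row above, not the authors' code.
import re

def tokenize_brackets(tree_str):
    # Split the bracketed tree into '(', ')', and bare symbols.
    return re.findall(r"\(|\)|[^\s()]+", tree_str)

def tree_to_actions(tree_str):
    tokens = tokenize_brackets(tree_str)
    actions = []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == "(":
            # The token right after '(' is the nonterminal label.
            actions.append(f"NT({tokens[i + 1]})")
            i += 2
        elif tok == ")":
            actions.append("REDUCE")
            i += 1
        else:
            actions.append(f"GEN({tok})")
            i += 1
    return actions

if __name__ == "__main__":
    tree = "(S (NP The fox) (VP eats (NP worms)))"
    print(tree_to_actions(tree))
    # ['NT(S)', 'NT(NP)', 'GEN(The)', 'GEN(fox)', 'REDUCE',
    #  'NT(VP)', 'GEN(eats)', 'NT(NP)', 'GEN(worms)', 'REDUCE', 'REDUCE', 'REDUCE']
```

Left-corner and bottom-up oracles would reorder these decisions (and, for bottom-up, replace NT(X) with a REDUCE(X, n) style choice), but the top-down order shown here is the one the base RNNG in these rows trains on.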
GEM-SciDuet-train-30#paper-1041#slide-8 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-8 | LSTM Number Agreement Error Rates | Capacity and size of training corpus are not the full story
Domain and training settings matter too
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Capacity and size of training corpus are not the full story
Domain and training settings matter too
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-9 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-9 | Can Character LSTMs Learn Number Agreement Well | Character LSTMs have been used in various tasks, including machine translation, language modelling, and many others.
It is easier to exploit morphological cues.
Model has to resolve dependencies between sequences of tokens.
The sequential dependencies are much longer.
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Character LSTMs have been used in various tasks, including machine translation, language modelling, and many others.
It is easier to exploit morphological cues.
Model has to resolve dependencies between sequences of tokens.
The sequential dependencies are much longer.
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-10 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-10 | Character LSTM Agreement Error Rates | model on Hutter Prize, with 27M parameters.
Trained, validated, and tested on the same data.
Strong character LSTM model performs much worse for multiple attractor cases
Consistent with earlier work
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | model on Hutter Prize, with 27M parameters.
Trained, validated, and tested on the same data.
Strong character LSTM model performs much worse for multiple attractor cases
Consistent with earlier work
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-11 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-11 | First Part Quick Recap | LSTM language models are able to learn number agreement to a much larger extent than suggested by earlier work.
Independently confirmed by Gulordava et al. (2018).
We further identify model capacity as one of the reasons for the discrepancy.
Model tuning is important.
A strong character LSTM language model performs much worse for number agreement with multiple attractors.
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | LSTM language models are able to learn number agreement to a much larger extent than suggested by earlier work.
Independently confirmed by Gulordava et al. (2018).
We further identify model capacity as one of the reasons for the discrepancy.
Model tuning is important.
A strong character LSTM language model performs much worse for number agreement with multiple attractors.
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-12 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-12 | Two Ways of Modelling Sentences | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-13 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-13 | Three Concrete Alternatives for Modeling Sentences | Sequential LSTMs without Syntax
Sequential LSTMs with Syntax (Choe and Charniak, 2016)
RNNG (Dyer et al., 2016) Hierarchical inductive bias
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Sequential LSTMs without Syntax
Sequential LSTMs with Syntax (Choe and Charniak, 2016)
RNNG (Dyer et al., 2016) Hierarchical inductive bias
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-14 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-14 | Evidence of Headedness in the Composition Function | Kuncoro et al. (2017) found evidence of syntactic headedness in RNNGs
The discovery of syntactic heads would be useful for number agreement
Inspection of composed representation through the attention weights
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Kuncoro et al. (2017) found evidence of syntactic headedness in RNNGs
The discovery of syntactic heads would be useful for number agreement
Inspection of composed representation through the attention weights
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-15 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-15 | Experimental Settings | All models are trained, validated, and tested on the same dataset.
On the training split, the syntactic models are trained using predicted phrase-structure trees from the Stanford parser.
At test time, we run the incremental beam search (Stern et al., 2017) procedure up to the main verb for both verb forms, and take the highest-scoring tree.
The most probable tree might potentially be different for the correct/incorrect verbs
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | All models are trained, validated, and tested on the same dataset.
On the training split, the syntactic models are trained using predicted phrase-structure trees from the Stanford parser.
At test time, we run the incremental beam search (Stern et al., 2017) procedure up to the main verb for both verb forms, and take the highest-scoring tree.
The most probable tree might potentially be different for the correct/incorrect verbs
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-16 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-16 | Experimental Findings | error rate reductions for n=4 and
Performance differences are significant (p <
Lower is better
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | error rate reductions for n=4 and
Performance differences are significant (p <
Lower is better
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-17 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-17 | Perplexity | RNNGs LSTM LM has the best perplexity
despite worse number agreement performance
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | RNNGs LSTM LM has the best perplexity
despite worse number agreement performance
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-18 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-18 | Further Remarks Confound in the Dataset | LSTM language models largely succeed in number agreement
In around of cases with multiple attractors, the agreement controller coincides with the first noun.
Key question: How do LSTMs succeed in this task?
Identifying the syntactic structure Memorising the first noun
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | LSTM language models largely succeed in number agreement
In around of cases with multiple attractors, the agreement controller coincides with the first noun.
Key question: How do LSTMs succeed in this task?
Identifying the syntactic structure Memorising the first noun
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-19 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
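The number agreement evaluation described in the text above compares the probability a model assigns to the correct and the incorrect verb form given the sentence prefix, then aggregates error rates by attractor count. The following Python sketch is illustrative only: it is not the authors' DyNet implementation, and the noun lists, the scoring function, and the single test item are hypothetical stand-ins. The toy scorer deliberately implements the sequential-recency heuristic (agree with the most recent noun) that attractors are designed to expose, so it fails the two-attractor example; a trained LSTM language model would be plugged in by replacing it with a function returning log p(word | prefix).

from collections import defaultdict

# Toy "most recent noun" baseline; the noun sets and the test item are made up.
PLURAL_NOUNS = {"parts", "keys", "flowers"}
SINGULAR_NOUNS = {"engine", "car", "cabinet", "vase"}

def recency_baseline_score(prefix, verb):
    last_noun_plural = None
    for tok in prefix:
        if tok in PLURAL_NOUNS:
            last_noun_plural = True
        elif tok in SINGULAR_NOUNS:
            last_noun_plural = False
    verb_is_plural = verb in {"have", "are", "bloom"}
    # Higher score when the verb's number matches the most recent noun.
    return 1.0 if verb_is_plural == last_noun_plural else 0.0

def agreement_error_rates(test_items, score_word):
    # Each item: (prefix tokens, correct verb form, incorrect verb form, #attractors).
    errors, totals = defaultdict(int), defaultdict(int)
    for prefix, correct, incorrect, n_attr in test_items:
        totals[n_attr] += 1
        # The model fails the item if the incorrect form scores at least as high.
        if score_word(prefix, incorrect) >= score_word(prefix, correct):
            errors[n_attr] += 1
    return {n: errors[n] / totals[n] for n in sorted(totals)}

items = [
    # "The parts of the engine in the car have/has ..." -> two singular attractors.
    (["the", "parts", "of", "the", "engine", "in", "the", "car"], "have", "has", 2),
]
print(agreement_error_rates(items, recency_baseline_score))  # {2: 1.0}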
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-19 | Control Condition Experiments for LSTM LM | Control condition breaks the correlation between the first noun and agreement controller
Confounded by first nouns
Much less likely to affect human experiments
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Control condition breaks the correlation between the first noun and agreement controller
Confounded by first nouns
Much less likely to affect human experiments
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | []
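The Further Analysis section in the record above estimates p(x) for the perplexity comparison by marginalizing over trees sampled from a discriminative proposal model (100 samples per sentence in the text). The snippet below is a minimal sketch of that importance-sampling estimate; log_joint and log_proposal stand in for the RNNG joint model p(x, y) and the proposal parser q(y | x), and the three trees with their scores are made-up numbers.

import math

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def estimate_log_px(log_joint, log_proposal, sampled_trees):
    # p(x) = sum_y p(x, y)  ~=  (1/N) * sum_i p(x, y_i) / q(y_i | x),  with y_i ~ q(. | x)
    weights = [log_joint(y) - log_proposal(y) for y in sampled_trees]
    return logsumexp(weights) - math.log(len(sampled_trees))

joint = {"t1": -20.1, "t2": -21.5, "t3": -19.8}     # hypothetical log p(x, y_i)
proposal = {"t1": -1.2, "t2": -0.9, "t3": -1.5}     # hypothetical log q(y_i | x)
log_px = estimate_log_px(lambda t: joint[t], lambda t: proposal[t], ["t1", "t2", "t3"])
print(log_px)  # estimate of log p(x); perplexity is exp of the negative per-token average of such estimates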
GEM-SciDuet-train-30#paper-1041#slide-20 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
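Sections 3 and 4 of the text above describe three ways of linearizing the same phrase-structure tree into structure-building actions. Below is a minimal sketch of the three oracles on the paper's "The fox eats worms" example; the action names mirror those in the text (NT, GEN, REDUCE, NT_SW, REDUCE(X, n), STOP), but the code only illustrates the action order, not the neural parameterization or the stick-breaking extent decision of the bottom-up variant.

def top_down(node):
    if isinstance(node, str):                     # terminal word
        return ["GEN(%s)" % node]
    label, children = node
    return ["NT(%s)" % label] + [a for c in children for a in top_down(c)] + ["REDUCE"]

def left_corner(node):
    if isinstance(node, str):
        return ["GEN(%s)" % node]
    label, children = node
    acts = left_corner(children[0]) + ["NT_SW(%s)" % label]   # announce parent after its left corner
    for c in children[1:]:
        acts += left_corner(c)
    return acts + ["REDUCE"]

def bottom_up(node, top_level=False):
    if isinstance(node, str):
        return ["GEN(%s)" % node]
    label, children = node
    acts = [a for c in children for a in bottom_up(c)]
    acts += ["REDUCE(%s, %d)" % (label, len(children))]        # label and arity decided last
    return acts + (["STOP"] if top_level else [])

tree = ("S", [("NP", ["The", "fox"]),
              ("VP", ["eats", ("NP", ["worms"])])])

print(top_down(tree))
# ['NT(S)', 'NT(NP)', 'GEN(The)', 'GEN(fox)', 'REDUCE', 'NT(VP)', 'GEN(eats)',
#  'NT(NP)', 'GEN(worms)', 'REDUCE', 'REDUCE', 'REDUCE']
print(left_corner(tree))
print(bottom_up(tree, top_level=True))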
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-20 | Control Condition Experiments for RNNG | Control for cues that artificial learners can exploit in a cognitive task.
Adversarial evaluation can better distinguish between models with correct generalisation and those that overfit to surface cues.
Same y-axis scale as LSTM LM
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Control for cues that artificial learners can exploit in a cognitive task.
Adversarial evaluation can better distinguish between models with correct generalisation and those that overfit to surface cues.
Same y-axis scale as LSTM LM
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
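The explicit composition operator discussed in the records above turns each completed constituent into a single composite element on the stack. The toy interpreter below makes that concrete for the top-down action sequence of "The flowers in the vase" up to the point where the verb is about to be generated (cf. the Fig. 3(a) discussion); plain string concatenation stands in for the learned composition function of the actual model.

def run_top_down(actions):
    stack = []
    for act in actions:
        if act.startswith("NT("):
            stack.append(("open", act[3:-1]))          # open nonterminal marker
        elif act.startswith("GEN("):
            stack.append(act[4:-1])                     # terminal word
        elif act == "REDUCE":
            children = []
            while not isinstance(stack[-1], tuple):     # pop completed children
                children.append(stack.pop())
            _, label = stack.pop()                      # pop the matching open nonterminal
            stack.append("(%s %s)" % (label, " ".join(reversed(children))))
    return stack

prefix_actions = ["NT(S)", "NT(NP)", "NT(NP)", "GEN(The)", "GEN(flowers)", "REDUCE",
                  "NT(PP)", "GEN(in)", "NT(NP)", "GEN(the)", "GEN(vase)",
                  "REDUCE", "REDUCE", "REDUCE", "NT(VP)"]
print(run_top_down(prefix_actions))
# [('open', 'S'), '(NP (NP The flowers) (PP in (NP the vase)))', ('open', 'VP')]
# Only three stack symbols remain, with the whole subject collapsed into a single
# composite element, whereas a sequential syntactic LSTM over the same prefix would
# condition on all fifteen symbols.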
GEM-SciDuet-train-30#paper-1041#slide-21 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-21 | Related Work | Augmenting our models with a hierarchical inductive bias is not the only way to achieve better number agreement.
Another alternative is to make relevant past information more salient, such as through memory architectures or attention mechanisms.
Yogatama et al. (2018) found that both attention mechanisms and memory architectures outperform standard LSTMs.
They found that a model with a stack-structured memory performs best, also demonstrating that a hierarchical, nested inductive bias is important for capturing syntactic dependencies.
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Augmenting our models with a hierarchical inductive bias is not the only way to achieve better number agreement.
Another alternative is to make relevant past information more salient, such as through memory architectures or attention mechanisms.
Yogatama et al. (2018) found that both attention mechanisms and memory architectures outperform standard LSTMs.
They found that a model with a stack-structured memory performs best, also demonstrating that a hierarchical, nested inductive bias is important for capturing syntactic dependencies.
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
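The paper text stored in these records attributes the RNNG's advantage to its explicit composition operator: each REDUCE pops a completed constituent off the stack and replaces it with a single composed symbol. A minimal sketch of that stack discipline for a top-down NT(X)/GEN(x)/REDUCE action sequence follows; it is an illustration only, and the `compose` helper is a symbolic placeholder for the learned bidirectional-LSTM composition function used in the actual models.

```python
def compose(label, children):
    # Placeholder for the learned composition function: a real RNNG encodes
    # the children (and the label) with a bidirectional LSTM into one vector;
    # here we just build a labelled tuple so the stack bookkeeping is visible.
    return (label, tuple(children))

def run_actions(actions):
    """Execute a simplified top-down RNNG-style action sequence over a stack."""
    stack = []
    OPEN = object()          # sentinel marking an open nonterminal
    open_labels = []

    for act in actions:
        if act.startswith("NT("):        # push an open nonterminal, e.g. NT(NP)
            stack.append(OPEN)
            open_labels.append(act[3:-1])
        elif act.startswith("GEN("):     # generate/push a terminal, e.g. GEN(fox)
            stack.append(act[4:-1])
        elif act == "REDUCE":            # close the most recent open nonterminal
            children = []
            while stack[-1] is not OPEN:
                children.append(stack.pop())
            stack.pop()                  # remove the open-nonterminal marker
            children.reverse()
            # Replace the popped elements with ONE composed item: this is what
            # keeps "(NP The flowers in the vase)" a single stack symbol when
            # the verb is predicted, rather than several separate symbols.
            stack.append(compose(open_labels.pop(), children))
        else:
            raise ValueError("unknown action: " + act)

    assert len(stack) == 1, "a complete derivation leaves one tree on the stack"
    return stack[0]

if __name__ == "__main__":
    tree = run_actions([
        "NT(S)",
        "NT(NP)", "GEN(The)", "GEN(fox)", "REDUCE",
        "NT(VP)", "GEN(eats)",
        "NT(NP)", "GEN(worms)", "REDUCE",
        "REDUCE",
        "REDUCE",
    ])
    print(tree)
    # -> ('S', (('NP', ('The', 'fox')), ('VP', ('eats', ('NP', ('worms',))))))
```

Run on the "The fox eats worms" example from the paper text, the derivation leaves a single nested tuple on the stack, which is why the subject noun phrase stays one stack item, rather than several separate symbols, at the moment the verb is predicted.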
GEM-SciDuet-train-30#paper-1041#slide-22 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
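The bottom-up REDUCE(X, n) operation described in the paper content above (a stick-breaking decision over how many of the topmost stack elements become daughters of the new constituent) can be made concrete with a small sketch. This is an illustrative reconstruction, not the authors' DyNet implementation; `is_leftmost_child` is a hypothetical callback standing in for the stack-LSTM encoding, single-layer feedforward network, and logistic output layer named in the text.

```python
# Illustrative sketch only (not the paper's code): control flow of the bottom-up
# REDUCE(X, n) operation, where a stick-breaking loop decides the extent of the
# new constituent before it is labeled and pushed back onto the stack.

from typing import Callable, List, Tuple

Node = Tuple[str, tuple]  # (label_or_word, children); terminals are (word, ())

def reduce_bottom_up(stack: List[Node],
                     label: str,
                     is_leftmost_child: Callable[[List[Node], List[Node]], bool]) -> None:
    """Pop daughters from `stack` until the predictor says the newest popped
    element is the leftmost child, then push the labeled constituent."""
    children: List[Node] = []
    while stack:
        children.insert(0, stack.pop())     # daughters were pushed left-to-right
        if is_leftmost_child(stack, children):
            break                           # extent of the new constituent is complete
    stack.append((label, tuple(children)))

if __name__ == "__main__":
    # Toy run for "The fox eats worms": build (NP worms) once the terminals are on the stack.
    stack: List[Node] = [("NP", (("The", ()), ("fox", ()))), ("eats", ()), ("worms", ())]
    # Hypothetical predictor: stop after a single daughter (mimics step 5 in Fig. 4).
    reduce_bottom_up(stack, "NP", lambda st, ch: len(ch) == 1)
    print(stack)  # [('NP', (('The', ()), ('fox', ()))), ('eats', ()), ('NP', (('worms', ()),))]
```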
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-22 | Second Part Quick Recap | RNNGs considerably outperform LSTM language model and sequential syntactic LSTM for number agreement with multiple attractors.
Syntactic annotation alone has little impact on number agreement accuracy.
RNNGs success is due to the hierarchical inductive bias.
The RNNGs performance is a new state of the art on this dataset
Perplexity is only loosely correlated with number agreement.
Independently confirm the finding of Tran et al. (2018).
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | RNNGs considerably outperform LSTM language model and sequential syntactic LSTM for number agreement with multiple attractors.
Syntactic annotation alone has little impact on number agreement accuracy.
RNNGs success is due to the hierarchical inductive bias.
The RNNGs performance is a new state of the art on this dataset
Perplexity is only loosely correlated with number agreement.
Independently confirm the finding of Tran et al. (2018).
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
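The number agreement diagnostic summarized in this record's slide content reduces to comparing the model's score of the correct verb form against the wrongly inflected one, given the sentence prefix. The sketch below is a hedged illustration of that protocol, not the released evaluation code; `log_prob` is a hypothetical stand-in for any language model's conditional log-probability.

```python
# Minimal sketch of the subject-verb agreement evaluation: an example counts as an
# error whenever the incorrect verb form scores at least as high as the correct one.

from typing import Callable, Iterable, List, Tuple

def agreement_error_rate(examples: Iterable[Tuple[List[str], str, str]],
                         log_prob: Callable[[List[str], str], float]) -> float:
    """examples: (prefix_tokens, correct_verb, incorrect_verb) triples."""
    errors, total = 0, 0
    for prefix, correct, incorrect in examples:
        total += 1
        if log_prob(prefix, incorrect) >= log_prob(prefix, correct):
            errors += 1
    return errors / max(total, 1)

if __name__ == "__main__":
    # Toy "model" that simply prefers verbs agreeing with the first noun it sees.
    def toy_log_prob(prefix: List[str], word: str) -> float:
        plural_subject = prefix[1].endswith("s") if len(prefix) > 1 else False
        wants_plural = word in {"are", "have"}
        return 0.0 if plural_subject == wants_plural else -1.0

    data = [(["the", "flowers", "in", "the", "vase"], "are", "is")]
    print(agreement_error_rate(data, toy_log_prob))  # 0.0
```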
GEM-SciDuet-train-30#paper-1041#slide-23 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
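The left-corner NT_SW(X) action described in the content above (push an open nonterminal as in top-down NT(X), then swap the two topmost stack elements so the already-built leftmost child sits inside it) is easy to mis-read in prose. The following minimal sketch, which assumes nothing more than a plain Python list as the stack, shows exactly what the swap does; it is an illustration, not the authors' implementation.

```python
# Illustrative sketch of the left-corner NT_SW(X) action: open a nonterminal,
# then swap it under the current top of stack so the leftmost child is attached.

from typing import List

def nt_sw(stack: List[str], label: str) -> None:
    """Open nonterminal `label` and swap it with the current topmost stack element."""
    stack.append(f"({label}")                      # push the open nonterminal, as in NT(X)
    stack[-1], stack[-2] = stack[-2], stack[-1]    # swap the two topmost elements

if __name__ == "__main__":
    stack = ["The"]        # step 1 of Fig. 6: only the leftmost terminal so far
    nt_sw(stack, "NP")
    print(stack)           # ['(NP', 'The'] -- i.e. the stack now reads (NP | The (step 2)
```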
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-23 | Different Tree Traversals | RNNGs operate according to a top-down, left-to-right traversal
Here we propose two alternative tree construction orders for RNNGs: left-corner and bottom-up traversals.
x: the flowers in the vase are/is [blooming]
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | RNNGs operate according to a top-down, left-to-right traversal
Here we propose two alternative tree construction orders for RNNGs: left-corner and bottom-up traversals.
x: the flowers in the vase are/is [blooming]
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
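The slide example "x: the flowers in the vase are/is [blooming]" corresponds to the top-down derivation discussed in the paper content. The sketch below reconstructs one plausible action sequence for that sentence (the bracketing follows the Fig. 3 example, not released code) and shows that, right before the verb is generated, the stack holds only three symbols, with the whole subject collapsed into a single composed constituent.

```python
# Hedged reconstruction of a top-down RNNG action sequence for
# "The flowers in the vase are blooming", with a tiny interpreter that prints the
# stack just before the main verb. Composition makes the subject one stack symbol.

actions = [
    "NT(S)", "NT(NP)", "NT(NP)", "GEN(The)", "GEN(flowers)", "REDUCE",   # (NP The flowers)
    "NT(PP)", "GEN(in)", "NT(NP)", "GEN(the)", "GEN(vase)", "REDUCE",    # (NP the vase)
    "REDUCE", "REDUCE",                                                  # close PP, close subject NP
    "NT(VP)", "GEN(are)", "GEN(blooming)", "REDUCE", "REDUCE",           # (VP are blooming), close S
]

def run(actions):
    stack = []
    for a in actions:
        if a == "GEN(are)":
            print("stack before the verb:", stack)   # ['(S', '(NP ...)', '(VP'] -- three symbols
        if a.startswith("NT("):
            stack.append("(" + a[3:-1])              # push an open nonterminal
        elif a.startswith("GEN("):
            stack.append(a[4:-1])                    # push a terminal
        else:  # REDUCE: pop children up to the open nonterminal, push the composed phrase
            children = []
            while not (stack and stack[-1].startswith("(") and " " not in stack[-1]):
                children.insert(0, stack.pop())
            stack.append(stack.pop() + " " + " ".join(children) + ")")
    return stack

print(run(actions))   # a single bracketed tree for the whole sentence
```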
GEM-SciDuet-train-30#paper-1041#slide-24 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-24 | Quick Illustration of the Differences Top Down | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
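The record above describes the number agreement diagnostic used throughout the paper: the trained model scores the correct and the incorrect verb form given the same prefix, and an error is counted when the incorrect form is not outscored. The following is a minimal illustrative sketch of that comparison, not the authors' implementation; the `score_next` interface, the toy scoring rule, and the example prefixes are assumptions made only for this sketch.

```python
# Minimal illustrative sketch (not the authors' code) of the number agreement
# diagnostic: compare the scores of the correct and incorrect verb forms given
# the same prefix. The `score_next` interface and toy scoring rule are assumed.

from typing import Callable, List, Sequence, Tuple

def agreement_error_rate(
    score_next: Callable[[Sequence[str], str], float],   # assumed: log p(next symbol | prefix)
    examples: List[Tuple[List[str], str, str]],           # (prefix, correct verb, incorrect verb)
) -> float:
    """Fraction of examples where the incorrect verb form scores at least as high."""
    errors = sum(
        1 for prefix, correct, incorrect in examples
        if score_next(prefix, correct) <= score_next(prefix, incorrect)
    )
    return errors / max(len(examples), 1)

if __name__ == "__main__":
    # Toy stand-in for a trained LSTM/RNNG: it checks whether the noun right
    # after the determiner looks plural (a crude subject heuristic), which is
    # enough to get these two toy prefixes right.
    def toy_score(prefix: Sequence[str], verb: str) -> float:
        subject_plural = prefix[1].endswith("s")
        verb_plural = verb in {"are", "have"}
        return 0.0 if subject_plural == verb_plural else -1.0

    data = [
        (["the", "flowers", "in", "the", "vase"], "are", "is"),
        (["the", "flower", "in", "the", "vases"], "is", "are"),
    ]
    print(agreement_error_rate(toy_score, data))   # prints 0.0 on this toy data
```

In the paper's setting, `score_next` would be the conditional next-symbol probability of the word LSTM, character LSTM, syntactic LSTM, or RNNG, with structure-building actions included in the prefix for the syntactic models.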
GEM-SciDuet-train-30#paper-1041#slide-25 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-25 | Quick Illustration of the Differences Left Corner | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
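The records above contrast top-down, left-corner, and bottom-up construction orders. The sketch below derives oracle action sequences for a toy tree under the three strategies as described in the text, using NT(X), NT_SW(X) (the "NT SW" action), REDUCE, REDUCE(X, n), STOP, and GEN(w) for emitting a terminal. The tuple-based tree encoding and helper functions are assumptions made for illustration; the real models additionally interleave these actions with a stack LSTM and a learned composition function, which are omitted here.

```python
# Illustrative sketch: oracle action sequences for a toy phrase-structure tree
# under the three construction orders discussed above. Not the authors' code.

Tree = tuple  # ("LABEL", child, child, ...); terminals are plain strings

def top_down(tree):
    """Announce a nonterminal, generate its children, then REDUCE."""
    if isinstance(tree, str):
        return [f"GEN({tree})"]
    actions = [f"NT({tree[0]})"]
    for child in tree[1:]:
        actions += top_down(child)
    return actions + ["REDUCE"]

def bottom_up(tree, top_level=True):
    """Build all daughters first; REDUCE(X, n) then labels the new constituent
    and fixes how many completed stack elements it spans. STOP ends the tree."""
    if isinstance(tree, str):
        actions = [f"GEN({tree})"]
    else:
        actions = []
        for child in tree[1:]:
            actions += bottom_up(child, top_level=False)
        actions.append(f"REDUCE({tree[0]}, {len(tree) - 1})")
    return actions + (["STOP"] if top_level else [])

def left_corner(tree):
    """Generate the leftmost descendant first, announce its parent with NT_SW
    (which swaps the two topmost stack elements), then build the remaining
    children top-down. Termination handling for the full generator is omitted."""
    if isinstance(tree, str):
        return [f"GEN({tree})"]
    actions = left_corner(tree[1])            # left corner first
    actions.append(f"NT_SW({tree[0]})")       # announce parent above the left corner
    for child in tree[2:]:
        actions += left_corner(child)
    return actions + ["REDUCE"]

if __name__ == "__main__":
    tree = ("S", ("NP", "The", "fox"), ("VP", "eats", ("NP", "worms")))
    for name, actions in [("top-down", top_down(tree)),
                          ("left-corner", left_corner(tree)),
                          ("bottom-up", bottom_up(tree))]:
        print(f"{name:11s} " + " ".join(actions))
```

For the sentence "The fox eats worms" this reproduces the orderings sketched in the paper's figures: top-down announces nonterminals before their children, bottom-up labels a constituent only after all of its daughters are built and must also decide how many stack elements it covers, and left-corner generates the leftmost terminal before announcing its parent.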
GEM-SciDuet-train-30#paper-1041#slide-26 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-26 | Quick Illustration of the Differences Bottom Up | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-27 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-27 | Why Does The Build Order Matter | The three different strategies yield different intermediate states during the generation process and impose different biases on the learner.
Earlier work in parsing has characterised the strategies' plausibility in
Resnik, 1992). We evaluate these strategies as models of generation
(Manning and Carpenter, 1997) in terms of number agreement accuracy.
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | The three different strategies yield different intermediate states during the generation process and impose different biases on the learner.
Earlier work in parsing has characterised the strategies' plausibility in
Resnik, 1992). We evaluate these strategies as models of generation
(Manning and Carpenter, 1997) in terms of number agreement accuracy.
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-28 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-28 | Bottom up Traversal | x, y: (S (NP the hungry cat) (VP meows))
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | x, y: (S (NP the hungry cat) (VP meows))
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
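The slide row above pairs the example tree x, y: (S (NP the hungry cat) (VP meows)) with the paper's bottom-up traversal; for a bracketed tree like this, the top-down and bottom-up action sequences the paper contrasts can be read off mechanically. The following is a rough Python sketch under the assumptions that the tree is given as a nested tuple and that the bottom-up reduction is written out explicitly as REDUCE(X, n) with a final STOP, as the paper describes; the helper names are illustrative, not taken from the authors' implementation.

# A labelled tree is a nested tuple (label, child_1, ..., child_k);
# a bare string is a terminal word.
TREE = ("S", ("NP", "the", "hungry", "cat"), ("VP", "meows"))

def topdown_actions(tree):
    # Top-down, left-to-right oracle: open the nonterminal, generate or
    # recurse into each child, then close it with REDUCE.
    if isinstance(tree, str):
        return ["GEN(%s)" % tree]
    label, *children = tree
    actions = ["NT(%s)" % label]
    for child in children:
        actions += topdown_actions(child)
    return actions + ["REDUCE"]

def bottomup_actions(tree, root=True):
    # Bottom-up oracle: build all children first, then label the parent
    # with REDUCE(X, n), where n is the number of daughters; a final STOP
    # marks the tree as complete.
    if isinstance(tree, str):
        return ["GEN(%s)" % tree]
    label, *children = tree
    actions = []
    for child in children:
        actions += bottomup_actions(child, root=False)
    actions.append("REDUCE(%s, %d)" % (label, len(children)))
    return actions + (["STOP"] if root else [])

print(topdown_actions(TREE))
# ['NT(S)', 'NT(NP)', 'GEN(the)', 'GEN(hungry)', 'GEN(cat)', 'REDUCE',
#  'NT(VP)', 'GEN(meows)', 'REDUCE', 'REDUCE']
print(bottomup_actions(TREE))
# ['GEN(the)', 'GEN(hungry)', 'GEN(cat)', 'REDUCE(NP, 3)', 'GEN(meows)',
#  'REDUCE(VP, 1)', 'REDUCE(S, 2)', 'STOP']

A left-corner oracle would interleave the two regimes, emitting each leftmost terminal before the swap action that introduces its parent label; it is omitted here to keep the sketch short.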
GEM-SciDuet-train-30#paper-1041#slide-29 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-29 | Bottom Up Traversal | x, y: (S (NP the hungry cat) (VP meows))
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018)
Action: REDUCE-1-VP Topmost stack element | x, y: (S (NP the hungry cat) (VP meows))
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018)
Action: REDUCE-1-VP Topmost stack element | [] |
GEM-SciDuet-train-30#paper-1041#slide-30 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-30 | Bottom Up Traversal After REDUCE 1 VP | x, y: (S (NP the hungry cat) (VP meows))
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | x, y: (S (NP the hungry cat) (VP meows))
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-31 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-31 | Bottom Up Parameterisation of Constituent Extent | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-32 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-32 | Summary Statistics | Near-identical perplexity for each variant
Bottom-up has the shortest stack depth
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Near-identical perplexity for each variant
Bottom-up has the shortest stack depth
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |
GEM-SciDuet-train-30#paper-1041#slide-33 | 1041 | LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better | Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.",
"Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .",
"Here we revisit the question asked by Linzen et al.",
"(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.",
"to what extent are these models able to learn non-local syntactic dependencies in natural language?",
"Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.",
"We provide an example of this task in Fig.",
"1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.",
"Contrary to the findings of Linzen et al.",
"(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).",
"Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.",
"Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.",
"Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?",
"We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).",
"We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.",
"Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .",
"Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.",
"As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).",
"Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?",
"Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.",
"In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.",
"In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.",
"As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.",
"Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.",
"(2016) .",
"Experimental Settings.",
"We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.",
"(2016) .",
"1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.",
"We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .",
"Similar to Linzen et al.",
"(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.",
"All models are implemented using the DyNet library (Neubig et al., 2017) .",
"Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.",
"We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.",
"2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.",
"For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.",
"5 Our experiment independently derives the same finding as the recent work of Gulordava et al.",
"(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.",
"(2016) results.",
"While the pretrained large-scale language model of Jozefowicz et al.",
"(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.",
"Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .",
"In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).",
"Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.",
"If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.",
"We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.",
"The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.",
"(2016) number agreement dataset.",
"A priori, we expect that number agreement is harder for character LSTMs for two reasons.",
"First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.",
"tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.",
"Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.",
"On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.",
"As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.",
"This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.",
"To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .",
"Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?",
"We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.",
"Our choice of RNNGs is motivated by the findings of Kuncoro et al.",
"(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.",
"Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.",
"In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.",
"Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.",
"Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .",
"Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.",
"3(a) .",
"7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.",
"During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.",
"The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.",
"Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.",
"Experimental settings.",
"We obtain phrasestructure trees for the Linzen et al.",
"(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .",
"At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.",
"9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.",
"An example of the stack contents (i.e.",
"the prefix) when predicting the verb is provided in Fig.",
"3(a) .",
"We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.",
"Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).",
"Discussion.",
"Fig.",
"2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.",
"We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.",
"3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.",
"3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.",
"The performance gain of RNNGs might arise from two potential causes.",
"First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.",
"Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.",
"Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?",
"To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.",
"Taking the example in Fig.",
"3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.",
"In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.",
"Fig.",
"2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.",
"This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.",
"Our finding is consistent with the recent work of Yogatama et al.",
"(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.",
"Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.",
"Perplexity.",
"To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?",
"We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.",
"Following Dyer et al.",
"(2016) , for each sentence on the validation set we sample 100 candidate trees from a discriminative model 11 as our proposal distribution.",
"As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.",
"Incrementality constraints.",
"As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.",
"To address this concern, we remark that the empirical evidence from Fig.",
"2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.",
"Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.",
"(2017) .",
"12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.",
"13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.",
"Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.",
"2 .",
"Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.",
"13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.",
"struction order than the top-down, left-to-right order used above.",
"These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.",
"Hale, 2014, chapter 3).",
"They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .",
"This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.",
"14 Here we state our hypothesis on why the build order matters.",
"The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.",
"Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .",
"Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?",
"These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.",
"In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.",
"3 , more or less salient.",
"If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.",
"The three proposed build orders are compared in Fig.",
"3 , showing the respective configurations (i.e.",
"the prefix) when generating the main verb in a sentence with a single attractor.",
"15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.",
"Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.",
"4 .",
"Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.",
"As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.",
"In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.",
"16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).",
"In step 5 of Fig.",
"4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.",
"We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.",
"whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.",
"5 .",
"If not, the process is then repeated after the topmost stack element is popped.",
"Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.",
"5 this is an NP.",
"A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.",
"4 for examples where this happens).",
"We thus introduce an explicit STOP action (step 8, Fig.",
"4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.",
"16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.",
"Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.",
"As illustrated in Fig.",
"6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).",
"A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.",
"step 3).",
"The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).",
"The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.",
"This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.",
"In step 1 of Fig.",
"6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).",
"Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.",
"Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.",
"(2016) validation set.",
"We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .",
"To account for randomness in training, we report the error rate summary statistics of ten different runs.",
"Avg.",
"(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.",
"LM indicates the best sequential language model baseline ( §2).",
"We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.",
"Discussion.",
"In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.",
"All three RNNG variants outperform the sequential LSTM language model baseline for these cases.",
"Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.",
"We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.",
"The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.",
"Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.",
"While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.",
"We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.",
"(2010) .",
"Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.",
"Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.",
"Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.",
"We explore the possibility that how the structure is built affects number agreement performance.",
"Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Number Agreement with LSTM Language Models",
"Number Agreement with RNNGs",
"Recurrent Neural Network Grammars",
"Experiments",
"Further Analysis",
"Top-Down, Left-Corner, and Bottom-Up Traversals",
"Bottom-Up Traversal",
"Left-Corner Traversal",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-30#paper-1041#slide-33 | Different Traversal Number Agreement Error Rates | Top-down performs best for n=3 and n=4
Bottom-Up For n=4 this is
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | Top-down performs best for n=3 and n=4
Bottom-Up For n=4 this is
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018) | [] |