input (string, 286–19k chars) | output (string, 1–15.8k chars) | metadata (dict) | _instance_id (string, 15–62 chars)
---|---|---|---|
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Andrews et al. (2009) furthered this work by showing that a bimodal topic model, consisting of both text and feature norms, outperformed models using only one modality on the prediction of association norms, word substitution errors, and semantic interference tasks. Griffiths et al. (2007) helped pave the path for cognitive-linguistic multimodal research, showing that Latent Dirichlet Allocation outperformed Latent Semantic Analysis (Deerwester et al., 1990) in the prediction of association norms. cue word and name the first (or several) associated words that come to mind (e.g., Nelson et al. (2004)), and feature norms, where subjects are given a cue word and asked to describe typical properties of the cue concept (e.g., McRae et al. (2005)).
Citation Sentence:
Andrews et al. ( 2009 ) furthered this work by showing that a bimodal topic model , consisting of both text and feature norms , outperformed models using only one modality on the prediction of association norms , word substitution errors , and semantic interference tasks .
Context after the citation:
In a similar vein, Steyvers (2010) showed that a different feature-topic model improved predictions on a fill-in-the-blank task. Johns and Jones (2012) take an entirely different approach by showing that one can successfully infer held out feature norms from weighted mixtures based on textual similarity. Silberer and Lapata (2012) introduce a new method of multimodal integration based on Canonical Correlation Analysis, and performs a systematic comparison between their CCA-based model and others on association norm prediction, held out feature prediction, and word similarity. As computer vision techniques have improved over the past decade, other research has begun directly using visual information in place of feature norms. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1448 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
A cooccurrence based stemmer (Xu and Croft, 1998) was used to stem Spanish words. Each English word has around 1.5 translations on average. For Spanish, we downloaded a bilingual English-Spanish lexicon from the Internet (http://www.activa.arralcis.es) containing around 22,000 English words (16,000 English stems) and processed it similarly.
Citation Sentence:
A cooccurrence based stemmer ( Xu and Croft , 1998 ) was used to stem Spanish words .
Context after the citation:
One difference from the treatment of Chinese is to include the English word as one of its own translations in addition to its Spanish translations in the lexicon. This is useful for translating proper nouns, which often have identical spellings in English and Spanish but are routinely excluded from a lexicon. One problem is the segmentation of Chinese text, since Chinese has no spaces between words. In these initial experiments, we relied on a simple sub-string matching algorithm to extract words from Chinese text. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1449 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Over the past decade, researchers at IBM have developed a series of increasingly sophisticated statistical models for machine translation (Brown et al., 1988; Brown et al., 1990; Brown et al., 1993a).
Citation Sentence:
Over the past decade , researchers at IBM have developed a series of increasingly sophisticated statistical models for machine translation ( Brown et al. , 1988 ; Brown et al. , 1990 ; Brown et al. , 1993a ) .
Context after the citation:
However, the IBM models, which attempt to capture a broad range of translation phenomena, are computationally expensive to apply. Table look-up using an explicit translation lexicon is sufficient and preferable for many multilingual NLP applications, including "crummy" MT on the World Wide Web (Church & Hovy, 1993), certain machine-assisted translation tools (e.g. (Macklovitch, 1994; Melamed, 1996b)), concordancing for bilingual lexicography (Catizone et al., 1993; Gale & Church, 1991), computer-assisted language learning, corpus linguistics (Melby, 1981), and cross-lingual information retrieval (Oard & Dorr, 1996). In this paper, we present a fast method for inducing accurate translation lexicons. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:145 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The best performance on the WSJ corpus was achieved by a combination of the SATZ system (Palmer and Hearst 1997) with the Alembic system (Aberdeen et al. 1995): a 0.5% error rate. State-of-the-art machine learning and rule-based SBD systems achieve an error rate of 0.8–1.5% measured on the Brown corpus and the WSJ corpus. Row C of Table 4 summarizes the highest results known to us (for all three tasks) produced by automatic systems on the Brown corpus and the WSJ corpus.
Citation Sentence:
The best performance on the WSJ corpus was achieved by a combination of the SATZ system ( Palmer and Hearst 1997 ) with the Alembic system ( Aberdeen et al. 1995 ) : a 0.5 % error rate .
Context after the citation:
The best performance on the Brown corpus, a 0.2% error rate, was reported by Riley (1989), who trained a decision tree classifier on a 25-million-word corpus. In the disambiguation of capitalized words, the most widespread method is POS tagging, which achieves about a 3% error rate on the Brown corpus and a 5% error rate on the WSJ corpus, as reported in Mikheev (2000). We are not aware of any studies devoted to the identification of abbreviations with comprehensive evaluation on either the Brown corpus or the WSJ corpus. In row D of Table 4, we summarized our main results: the results obtained by the application of our SBD rule set, which uses the information provided by the DCA to capitalized word disambiguation applied together with lexical lookup (as described in Section 7.5), and the abbreviation-handling strategy, which included the guessing heuristics, the DCA, and the list of 270 abbreviations (as described in Section 6). | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1450 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
In a number of proposals, lexical generalizations are captured using lexical underspecification (Kathol 1994; Krieger and Nerbonne 1992; Lexical rules have not gone unchallenged as a mechanism for expressing generalizations over lexical information.
Citation Sentence:
In a number of proposals , lexical generalizations are captured using lexical underspecification ( Kathol 1994 ; Krieger and Nerbonne 1992 ;
Context after the citation:
Riehemann 1993; Oliva 1994; Frank 1994; Opalka 1995; Sanfilippo 1995). The lexical entries are only partially specified, and various specializations are encoded via the type hierarchy, definite clause attachments, or a macro hierarchy. These approaches seem to propose a completely different way to capture lexical generalizations. It is therefore interesting that the covariation lexical rule compiler produces a lexicon encoding that, basically, uses an underspecification representation: The resulting definite clause representation after constraint propagation represents the common information in the base lexical entry, and uses a definite clause attachment to encode the different specializations. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1451 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
Although a grid may be more descriptively suitable for some aspects of prosody (for example, Sproat and Liberman (1987) use the grid representation for their implementation of stress assignment in compound nominals), we are not aware of any evidence for or against a grid representation of discourse-neutral phrasing. An alternative representation based on Liberman and Prince (1977) is presented in Selkirk (1984), which contends that prosody, including prosodic phrasing, is more properly represented as a grid instead of a tree. Following G&G, we require that the prosody rules build a binary tree whose terminals are phonological words and whose node labels are indices that mark boundary salience.
Citation Sentence:
Although a grid may be more descriptively suitable for some aspects of prosody ( for example , Sproat and Liberman ( 1987 ) use the grid representation for their implementation of stress assignment in compound nominals ) , we are not aware of any evidence for or against a grid representation of discourse-neutral phrasing .
Context after the citation:
Figure 8 shows the phonological phrase tree that is built from the syntactic structure of Figure 7. The rules for building this tree apply from left to right, following the analysis we described in the preceding section. Figures 9-11. show the prosodic phrase derivation. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1452 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
The bottom panel of table 1 lists the results for the chosen lexicalized model (SSN-Freq>200) and five recent statistical parsers (Ratnaparkhi, 1999; Collins, 1999; Charniak, 2000; Collins, 2000; Bod, 2001). The Tags model also does much better than the only other broad coverage neural network parser (Costa et al., 2001). The Tags model achieves performance which is better than any previously published results on parsing with a non-lexicalized model.
Citation Sentence:
The bottom panel of table 1 lists the results for the chosen lexicalized model ( SSN-Freq > 200 ) and five recent statistical parsers ( Ratnaparkhi , 1999 ; Collins , 1999 ; Charniak , 2000 ; Collins , 2000 ; Bod , 2001 ) .
Context after the citation:
The performance of the lexicalized model falls in the middle of this range, only being beaten by the three best current parsers, which all achieve equivalent performance. The best current model (Collins, 2000) has only 6% less precision error and only 11% less recall error than the lexicalized model. The SSN parser achieves this result using much less lexical knowledge than other approaches, which all minimally use the words which occur at least 5 times, plus morphological features of the remaining words. It is also achieved without any explicit notion of lexical head. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1453 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Other factors, such as the role of focus (Grosz 1977, 1978; Sidner 1983) or quantifier scoping (Webber 1983) must play a role, too. They are intended as an illustration of the power of abduction, which in this framework helps determine the universe of the model (that is the set of entities that appear in it). We have no doubts that various other metarules will be necessary; clearly, our two metarules cannot constitute the whole theory of anaphora resolution.
Citation Sentence:
Other factors , such as the role of focus ( Grosz 1977 , 1978 ; Sidner 1983 ) or quantifier scoping ( Webber 1983 ) must play a role , too .
Context after the citation:
Determining the relative importance of those factors, the above metarules, and syntactic clues, appears to be an interesting topic in itself. Note: In our translation from English to logic we are assuming that "it" is anaphoric (with the pronoun following the element that it refers to), not cataphoric (the other way around). This means that the "it" that brought the disease in P1 will not be considered to refer to the infection "i" or the death "d" in P3. This strategy is certainly the right one to start out with, since anaphora is always the more typical direction of reference in English prose (Halliday and Hasan 1976, p. 329). | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1454 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
linguistic in nature, rather than dealing with superficial properties of the text, e.g. the amount of white space between words (Por et al., 2008). 2 The message may have been encrypted initially also, as in the figure, but this is not important in this paper; the key point is that the hidden message is a sequence of bits. Note that we are concerned with transformations which are
Citation Sentence:
linguistic in nature , rather than dealing with superficial properties of the text , e.g. the amount of white space between words ( Por et al. , 2008 ) .
Context after the citation:
Our proposed method is based on the automatically acquired paraphrase dictionary described in Callison-Burch (2008), in which the application of paraphrases from the dictionary encodes secret bits. One advantage of the dictionary is that it has wide coverage, being automatically extracted; however, a disadvantage is that it contains many paraphrases which are either inappropriate, or only appropriate in certain contexts. Since we require any changes to be imperceptible to a human observer, it is crucial to our system that any uses of paraphrasing are grammatical and retain the meaning of the original cover text. In order to test the grammaticality and meaning preserving nature of a paraphrase, we employ a simple technique based on checking whether the contexts containing the paraphrase are in the Google ngram corpus. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1455 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The second version (RM) concerns the Resource Management task (Pallett 1989) that has been popular within the DARPA community in recent years. The first version (TIMIT) was developed for the 450 phonetically rich sentences of the TIMIT database (Lamel et al. 1986). To date, four distinct domain-specific versions of TINA have been implemented.
Citation Sentence:
The second version ( RM ) concerns the Resource Management task ( Pallett 1989 ) that has been popular within the DARPA community in recent years .
Context after the citation:
The third version (VOYAGER) serves as an interface both with a recognizer and with a functioning database back-end (Zue et al. 1990). The VOYAGER system can answer a number of different types of questions concerning navigation within a city, as well as provide certain information about hotels, restaurants, libraries, etc., within the region. A fourth domain-specific version is under development for the ATIS (Air Travel Information System) task, which has recently been designated as the new common task for the DARPA community. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1456 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Another line of research that is correlated with ours is recognition of agreement/disagreement (Misra and Walker, 2013; Yin et al., 2012; Abbott et al., 2011; Andreas et al., 2012; Galley et al., 2004; Hillard et al., 2003) and classification of stances (Walker et al., 2012; Somasundaran and Wiebe, 2010) in online forums. The corpus and guidelines will also be shared with the research community. Our work contributes a new principled method for building annotated corpora for online interactions.
Citation Sentence:
Another line of research that is correlated with ours is recognition of agreement/disagreement ( Misra and Walker , 2013 ; Yin et al. , 2012 ; Abbott et al. , 2011 ; Andreas et al. , 2012 ; Galley et al. , 2004 ; Hillard et al. , 2003 ) and classification of stances ( Walker et al. , 2012 ; Somasundaran and Wiebe , 2010 ) in online forums .
Context after the citation:
For future work, we can utilize textual features (contextual, dependency, discourse markers), relevant multiword expressions and topic modeling (Mukherjee and Liu, 2013), and thread structure (Murakami and Raymond, 2010; Agrawal et al., 2003) to improve the Agree/Disagree classification accuracy. Recently, Cabrio and Villata (2013) proposed a new direction of argumentative analysis where the authors show how arguments are associated with Recognizing Textual Entailment (RTE) research. They utilized RTE approach to detect the relation of support/attack among arguments (entailment expresses a "support" and contradiction expresses an "attack") on a dataset of arguments collected from online debates (e.g., Debatepedia). | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1457 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The studies presented by Goldwater and Griffiths (2007) and Johnson (2007) differed in the number of states that they used. accuracy. Then we use the dynamic programming sampler de-
Citation Sentence:
The studies presented by Goldwater and Griffiths ( 2007 ) and Johnson ( 2007 ) differed in the number of states that they used .
Context after the citation:
Goldwater and Griffiths (2007) evaluated against the reduced tag set of 17 tags developed by Smith and Eisner (2005), while Johnson (2007) evaluated against the full Penn Treebank tag set. We ran all our estimators in both conditions here (thanks to Noah Smith for supplying us with his tag set). Also, the studies differed in the size of the corpora used. The largest corpus that Goldwater and Griffiths (2007) studied contained 96,000 words, while Johnson (2007) used all of the 1,173,766 words in the full Penn WSJ treebank. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1458 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
A number of alignment techniques have been proposed, varying from statistical methods (Brown et al., 1991; Gale and Church, 1991) to lexical methods (Kay and Roscheisen, 1993; Chen, 1993). Aligning English-Chinese parallel texts is already very difficult because of the great differences in the syntactic structures and writing systems of the two languages. Some are highly parallel and easy to align while others can be very noisy.
Citation Sentence:
A number of alignment techniques have been proposed , varying from statistical methods ( Brown et al. , 1991 ; Gale and Church , 1991 ) to lexical methods ( Kay and Roscheisen , 1993 ; Chen , 1993 ) .
Context after the citation:
The method we adopted is that of Simard et al. (1992). Because it considers both length similarity and cognateness as alignment criteria, the method is more robust and better able to deal with noise than pure length-based methods. Cognates are identical sequences of characters in corresponding words in two languages. They are commonly found in English and French. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1459 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
One area of current interest concerns the left-to-right arrangement of premodifying adjectives within an NP (e.g., Shaw and Hatzivassiloglou 1999; Malouf 2000). A realization phase, during which the choice between base, superlative, and comparative forms is made, among other things. An inference phase, during which the list L is transformed; 3.
Citation Sentence:
One area of current interest concerns the left-to-right arrangement of premodifying adjectives within an NP ( e.g. , Shaw and Hatzivassiloglou 1999 ; Malouf 2000 ) .
Context after the citation:
Work in this area is often based on assigning adjectives to a small number of categories (e.g., Precentral, Central, Postcentral, and Prehead), which predict adjectives' relative position. Interestingly, vague properties tend to be realized before others. Quirk et al. (1985), for example, report that "adjectives denoting size, length, and height normally precede other nonderived adjectives" (e.g., the small round table is usually preferred to the round small table). Semantically, this does not come as a surprise. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:146 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
conclusion
Context before the citation:
We would like to use features that look at wide context on the input side, which is inexpensive (Jiampojamarn et al., 2007). In future work, we would like to identify a set of features, latent variables, and training methods that port well across languages and string-transduction tasks. The model's errors are often reasonable misgeneralizations (e.g., assume regular conjugation where irregular would have been correct), and it is able to use even a small number of latent variables (including the latent alignment) to capture useful linguistic properties.
Citation Sentence:
We would like to use features that look at wide context on the input side , which is inexpensive ( Jiampojamarn et al. , 2007 ) .
Context after the citation:
Latent variables we wish to consider are an increased number of word classes; more flexible regions (see Petrov et al. (2007) on learning a state transition diagram for acoustic regions in phone recognition); and phonological features and syllable boundaries. Indeed, our local log-linear features over several aligned latent strings closely resemble the soft constraints used by phonologists (Eisner, 1997). Finally, rather than define a fixed set of feature templates as in Fig. 2, we would like to refine empirically useful features during training, resulting in language-specific backoff patterns and adaptively sized n-gram windows. Many of these enhancements will increase the computational burden, and we are interested in strategies to mitigate this, including approximation methods. | FutureWork | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1460 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
Much of the earlier work in anaphora resolution heavily exploited domain and linguistic knowledge (Sidner 1979; Carter 1987; Rich and LuperFoy 1988; Carbonell and Brown 1988), which was difficult both to represent and to process, and which required considerable human input. Last, but not least, application-driven research in areas such as automatic abstracting and information extraction independently highlighted the importance of anaphora and coreference resolution, boosting research in this area. The drive toward corpus-based robust NLP solutions further stimulated interest in alternative and/or data-enriched approaches.
Citation Sentence:
Much of the earlier work in anaphora resolution heavily exploited domain and linguistic knowledge ( Sidner 1979 ; Carter 1987 ; Rich and LuperFoy 1988 ; Carbonell and Brown 1988 ) , which was difficult both to represent and to process , and which required considerable human input .
Context after the citation:
However, the pressing need for the development of robust and inexpensive solutions to meet the demands of practical NLP systems encouraged many researchers to move away from extensive domain and linguistic knowledge and to embark instead upon knowledge-poor anaphora resolution strategies. A number of proposals in the 1990s deliberately limited the extent to which they relied on domain and/or linguistic knowledge and reported promising results in knowledge-poor operational environments (Dagan and Itai 1990, 1991; Lappin and Leass 1994; Nasukawa 1994; Kennedy and Boguraev 1996; Williams, Harvey, and Preston 1996; Baldwin 1997; Mitkov 1996, 1998b). The drive toward knowledge-poor and robust approaches was further motivated by the emergence of cheaper and more reliable corpus-based NLP tools such as part-of-speech taggers and shallow parsers, alongside the increasing availability of corpora and other NLP resources (e.g., ontologies). In fact, the availability of corpora, both raw and annotated with coreferential links, provided a strong impetus to anaphora resolu- | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1461 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Experiments on Chinese SRL (Xue and Palmer 2005, Xue 2008) reassured these findings. For semantic analysis, developing features that capture the right kind of information is crucial. They found out that different features suited for different sub tasks of SRL, i.e. semantic role identification and classification.
Citation Sentence:
Experiments on Chinese SRL ( Xue and Palmer 2005 , Xue 2008 ) reassured these findings .
Context after the citation:
In this paper, we mainly focus on the semantic role classification (SRC) process. With the findings about the linguistic discrepancy of different semantic role groups, we try to build a 2-step semantic role classifier with hierarchical feature selection strategy. That means, for different sub tasks, different models will be trained with different features. The purpose of this strategy is to capture the right kind of information of different semantic role groups. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1462 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
This is then generalized, following a methodology based on Block (2000), to generate the "generalized marker lexicon." First, the phrasal lexicon is segmented using the marker hypothesis to produce a marker lexicon. From this initial resource, we subsequently derive a number of different databases that together allow many new input sentences to be translated that it would not be possible to translate in other systems.
Citation Sentence:
This is then generalized , following a methodology based on Block ( 2000 ) , to generate the `` generalized marker lexicon . ''
Context after the citation:
Finally, as a result of the methodology chosen, we automatically derive a fourth resource, namely, a "word-level lexicon." 3 We refer the interested reader to the excellent and comprehensive bibliography on parallel text processing available at http://www.up.univ-mrs.fr/~veronis/biblios/ptp.htm. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1463 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
A number of speech understanding systems have been developed during the past fifteen years (Barnett et al. 1980, Dixon and Martin 1979, Erman et al. 1980, Haton and Pierrel 1976, Lea 1980, Lowerre and Reddy 1980, Medress 1980, Reddy 1976, Walker 1978, and Wolf and Woods 1980).
Citation Sentence:
A number of speech understanding systems have been developed during the past fifteen years ( Barnett et al. 1980 , Dixon and Martin 1979 , Erman et al. 1980 , Haton and Pierrel 1976 , Lea 1980 , Lowerre and Reddy 1980 , Medress 1980 , Reddy 1976 , Walker 1978 , and Wolf and Woods 1980 ) .
Context after the citation:
Most of these efforts concentrated on the interaction between low level information sources from a speech recognizer and a natural language processor to discover the meaning of an input sentence. While some of these systems did exhibit expectation capabilities at the sentence level, none acquired dialogues of the kind described here for the sake of dialogue level expectation and error correction. A detailed description of the kinds of expectation mechanisms appearing in these systems appears in Fink (1983). The problem of handling ill-formed input has been studied by Carbonell and Hayes (1983), Granger (1983), Jensen et al. (1983), Kwasny and Sondheimer (1981), Riesbeck and Schank (1976), Thompson (1980), Weischedel and Black (1980), and Weischedel and Sondheimer (1983). | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1464 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Our proposed method is based on the automatically acquired paraphrase dictionary described in Callison-Burch (2008), in which the application of paraphrases from the dictionary encodes secret bits. linguistic in nature, rather than dealing with superficial properties of the text, e.g. the amount of white space between words (Por et al., 2008). 2 The message may have been encrypted initially also, as in the figure, but this is not important in this paper; the key point is that the hidden message is a sequence of bits.
Citation Sentence:
Our proposed method is based on the automatically acquired paraphrase dictionary described in Callison-Burch ( 2008 ) , in which the application of paraphrases from the dictionary encodes secret bits .
Context after the citation:
One advantage of the dictionary is that it has wide coverage, being automatically extracted; however, a disadvantage is that it contains many paraphrases which are either inappropriate, or only appropriate in certain contexts. Since we require any changes to be imperceptible to a human observer, it is crucial to our system that any uses of paraphrasing are grammatical and retain the meaning of the original cover text. In order to test the grammaticality and meaning preserving nature of a paraphrase, we employ a simple technique based on checking whether the contexts containing the paraphrase are in the Google ngram corpus. This technique is based on the simple hypothesis that, if the paraphrase in context has been used many times before on the web, then it is an appropriate use. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1465 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
For example, frequent words are translated less consistently than rare words (Melamed, 1997). More accurate models can be induced by taking into account various features of the linked tokens. In the basic word-to-word model, the hidden parameters A+ and A− depend only on the distributions of link frequencies generated by the competitive linking algorithm.
Citation Sentence:
For example , frequent words are translated less consistently than rare words ( Melamed , 1997 ) .
Context after the citation:
To account for this difference, we can estimate separate values of A+ and A− for different ranges of n(u,v). Similarly, the hidden parameters can be conditioned on the linked parts of speech. Word order can be taken into account by conditioning the hidden parameters on the relative positions of linked word tokens in their respective sentences. Just as easily, we can model links that coincide with entries in a pre-existing translation lexicon separately | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1466 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Over the last decade there has been a lot of interest in developing tutorial dialogue systems that understand student explanations (Jordan et al., 2006; Graesser et al., 1999; Aleven et al., 2001; Buckley and Wolska, 2007; Nielsen et al., 2008; VanLehn et al., 2007), because high percentages of self-explanation and student contentful talk are known to be correlated with better learning in human-human tutoring (Chi et al., 1994; Litman et al., 2009; Purandare and Litman, 2008; Steinhauser et al., 2007).
Citation Sentence:
Over the last decade there has been a lot of interest in developing tutorial dialogue systems that understand student explanations ( Jordan et al. , 2006 ; Graesser et al. , 1999 ; Aleven et al. , 2001 ; Buckley and Wolska , 2007 ; Nielsen et al. , 2008 ; VanLehn et al. , 2007 ) , because high percentages of self-explanation and student contentful talk are known to be correlated with better learning in human-human tutoring ( Chi et al. , 1994 ; Litman et al. , 2009 ; Purandare and Litman , 2008 ; Steinhauser et al. , 2007 ) .
Context after the citation:
However, most existing systems use pre-authored tutor responses for addressing student errors. The advantage of this approach is that tutors can devise remediation dialogues that are highly tailored to specific misconceptions many students share, providing step-by-step scaffolding and potentially suggesting additional problems. The disadvantage is a lack of adaptivity and generality: students often get the same remediation for the same error regardless of their past performance or dialogue context, as it is infeasible to author a different remediation dialogue for every possible dialogue state. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1467 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
These knowledge sources were effectively used to build a state-of-the-art WSD program in one of our prior work (Lee and Ng, 2002). The knowledge sources we use include parts-of-speech, local collocations, and surrounding words. For our experiments, we use naive Bayes as the learning algorithm.
Citation Sentence:
These knowledge sources were effectively used to build a state-of-the-art WSD program in one of our prior work ( Lee and Ng , 2002 ) .
Context after the citation:
In performing WSD with a naive Bayes classifier, the sense s assigned to an example with features f1, ... , fn is chosen so as to maximize: In our domain adaptation study, we start with a WSD system built using training examples drawn from BC. We then investigate the utility of adding additional in-domain training data from WSJ. In the baseline approach, the additional WSJ examples are randomly selected. With active learning (Lewis and Gale, 1994), we use uncertainty sampling as shown | Extends | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1468 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Another line of research approaches grounded language knowledge by augmenting distributional approaches of word meaning with perceptual information (Andrews et al., 2009; Steyvers, 2010; Feng and Lapata, 2010b; Bruni et al., 2011; Silberer and Lapata, 2012; Johns and Jones, 2012; Bruni et al., 2012a; Bruni et al., 2012b; Silberer et al., 2013). Some efforts have tackled tasks such as automatic image caption generation (Feng and Lapata, 2010a; Ordonez et al., 2011), text illustration (Joshi et al., 2006), or automatic location identification of Twitter users (Eisenstein et al., 2010; Wing and Baldridge, 2011; Roller et al., 2012). Others provide automatic mappings of natural language instructions to executable actions, such as interpreting navigation directions (Chen and Mooney, 2011) or robot commands (Tellex et al., 2011; Matuszek et al., 2012).
Citation Sentence:
Another line of research approaches grounded language knowledge by augmenting distributional approaches of word meaning with perceptual information ( Andrews et al. , 2009 ; Steyvers , 2010 ; Feng and Lapata , 2010b ; Bruni et al. , 2011 ; Silberer and Lapata , 2012 ; Johns and Jones , 2012 ; Bruni et al. , 2012a ; Bruni et al. , 2012b ; Silberer et al. , 2013 ) .
Context after the citation:
Although these approaches have differed in model definition, the general goal in this line of research has been to enhance word meaning with perceptual information in order to address one of the most common criticisms of distributional semantics: that the "meaning of words is entirely given by other words" (Bruni et al., 2012b). In this paper, we explore various ways to integrate new perceptual information through novel computational modeling of this grounded knowledge into a multimodal distributional model of word meaning. The model we rely on was originally developed by Andrews et al. (2009) and is based on a generalization of Latent Dirichlet Allocation. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1469 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
Models of translational equivalence that are ignorant of indirect associations have "a tendency ... to be confused by collocates" (Dagan et al., 1993). The arrow connecting vk and uk±i in Figure 1 represents an indirect association, since the association between vk and uk±i arises only by virtue of the association between each of them and uk . Then vk and uk+i will also co-occur more often than expected by chance.
Citation Sentence:
Models of translational equivalence that are ignorant of indirect associations have `` a tendency ... to be confused by collocates '' ( Dagan et al. , 1993 ) .
Context after the citation:
Fortunately, indirect associations are usually not difficult to identify, because they tend to be weaker than the direct associations on which they are based (Melamed, 1996c). The majority of indirect associations can be filtered out by a simple competition heuristic: Whenever several word tokens ui in one half of the bitext co-occur with a particular word token v in the other half of the bitext, the word that is most likely to be v's translation is the one for which the likelihood L(u, v) of translational equivalence is highest. The competitive linking algorithm implements this heuristic: 1. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:147 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Due to advances in statistical syntactic parsing techniques (Collins, 1997; Charniak, 2001), attention has recently shifted towards the harder question of analyzing the meaning of natural language sentences. The understanding of natural language text includes not only analysis of syntactic structure, but also of semantic content.
Citation Sentence:
Due to advances in statistical syntactic parsing techniques ( Collins , 1997 ; Charniak , 2001 ) , attention has recently shifted towards the harder question of analyzing the meaning of natural language sentences .
Context after the citation:
A common lexical semantic representation in the computational linguistics literature is a frame-based model where syntactic arguments are associated with various semantic roles (essentially frame slots). Verbs are viewed as simple predicates over their arguments. This approach has its roots in Fillmore's Case Grammar (1968), and serves as the foundation for two current large-scale semantic annotation projects: FrameNet (Baker et al., 1998) and PropBank (Kingsbury et al., 2002). Underlying the semantic roles approach is a lexicalist assumption, that is, each verb's lexical entry completely encodes (more formally, projects) its syntactic and semantic structures. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1470 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
conclusion
Context before the citation:
This is where robust syntactic systems like SATZ (Palmer and Hearst 1997) or the POS tagger reported in Mikheev (2000), which do not heavily rely on word capitalization and are not sensitive to document length, have an advantage. We noted in Section 8 that very short documents of one to three sentences also present a difficulty for our approach. optical character reader-generated texts.
Citation Sentence:
This is where robust syntactic systems like SATZ ( Palmer and Hearst 1997 ) or the POS tagger reported in Mikheev ( 2000 ) , which do not heavily rely on word capitalization and are not sensitive to document length , have an advantage .
Context after the citation:
Our DCA uses information derived from the entire document and thus can be used as a complement to approaches based on the local context. When we incorporated the DCA system into a POS tagger (Section 8), we measured a 30-35% cut in the error rate on proper-name identification in comparison to DCA or the POS-tagging approaches alone. This in turn enabled better tagging of sentence boundaries: a 0.20% error rate on the Brown corpus and a 0.31% error rate on the WSJ corpus, which corresponds to about a 20% cut in the error rate in comparison to DCA or the POS-tagging approaches alone. We also investigated the portability of our approach to other languages and obtained encouraging results on a corpus of news in Russian. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1471 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
category relationships from the weak supervision: the tag dictionary and raw corpus (Garrette and Baldridge, 2012; Garrette et al., 2015).4 This procedure attempts to automatically estimate the frequency of each word/tag combination by dividing the number of raw-corpus occurrences of each word in the dictionary evenly across all of its associated tags. We employ the same procedure as our previous work for setting the terminal production prior dis-
Citation Sentence:
category relationships from the weak supervision : the tag dictionary and raw corpus ( Garrette and Baldridge , 2012 ; Garrette et al. , 2015 ) .4 This procedure attempts to automatically estimate the frequency of each word/tag combination by dividing the number of raw-corpus occurrences of each word in the dictionary evenly across all of its associated tags .
Context after the citation:
These counts are then combined with estimates of the "openness" of each tag in order to assess its likelihood of appearing with new words. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1472 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
WIT has been implemented in Common Lisp and C on UNIX, and we have built several experimental and demonstration dialogue systems using it, including a meeting room reservation system (Nakano et al., 1999b), a video-recording programming system, a schedule management system (Nakano et al., 1999a), and a weather information system (Dohsaka et al., 2000). $ Implementation This makes it easy to find errors in the domain specifications.
Citation Sentence:
WIT has been implemented in Common Lisp and C on UNIX , and we have built several experimental and demonstration dialogue systems using it , including a meeting room reservation system ( Nakano et al. , 1999b ) , a video-recording programming system , a schedule management system ( Nakano et al. , 1999a ) , and a weather information system ( Dohsaka et al. , 2000 ) .
Context after the citation:
The meeting room reservation system has vocabulary of about 140 words, around 40 phrase structure rules, nine attributes in the semantic frame, and around 100 speech files. A sample dialogue between this system and a naive user is shown in Figure 2. This system employs HTK as the speech recognition engine. The weather information system can answer the user's questions about weather forecasts in Japan. | Extends | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1473 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
Lexical functional grammar (Kaplan and Bresnan 1982; Bresnan 2001; Dalrymple 2001) is a member of the family of constraint-based grammars.
Citation Sentence:
Lexical functional grammar ( Kaplan and Bresnan 1982 ; Bresnan 2001 ; Dalrymple 2001 ) is a member of the family of constraint-based grammars .
Context after the citation:
It posits minimally two levels of syntactic representation:2 c(onstituent)-structure encodes details of surface syntactic constituency, whereas f(unctional)-structure expresses abstract syntactic information about predicate-argument-modifier relations and certain morphosyntactic properties such as tense, aspect, and case. C-structure takes the form of phrase structure trees and is defined in terms of CFG rules and lexical entries. F-structure is produced from functional annotations on the nodes of the c-structure and implemented in terms of recursive feature structures (attribute-value matrices). This is exemplified by the analysis of the string The inquiry soon focused on the judge (wsj 0267 72) using the grammar in Figure 1, which results in the annotated c-structure and f-structure in Figure 2. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1474 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Sarkar and Zeman (2000) present an approach to learn previously unknown frames for Czech from the Prague Dependency Bank (Hajic As Kinyon and Prolo (2002) does not include an evaluation, currently it is impossible to say how effective their technique is. In general, argumenthood was preferred over adjuncthood.
Citation Sentence:
Sarkar and Zeman ( 2000 ) present an approach to learn previously unknown frames for Czech from the Prague Dependency Bank ( Hajic
Context after the citation:
1998). Czech is a language with a freer word order than English and so configurational information cannot be relied upon. In a dependency tree, the set of all dependents of the verb make up a so-called observed frame, whereas a subcategorization frame contains a subset of the dependents in the observed frame. Finding subcategorization frames involves filtering adjuncts from the observed frame. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1475 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
A companion paper describes the evaluation process and results in further detail (Chu-Carroll and Nickerson, 2000). In this section we summarize these experiments and their results. We compared MIMIC with two control systems: MIMIC-SI, a system-initiative version of MIMIC in which the system retains both initiatives throughout the dialogue, and MIMIC-MI, a nonadaptive mixed-initiative version of MIMIC that resembles the behavior of many existing dialogue systems.
Citation Sentence:
A companion paper describes the evaluation process and results in further detail ( Chu-Carroll and Nickerson , 2000 ) .
Context after the citation:
Each experiment involved eight users interacting with MIMIC and MIMIC-SI or MIMIC-MI to perform a set of tasks, each requiring the user to obtain specific movie information. User satisfaction was assessed by asking the subjects to fill out a questionnaire after interacting with each version of the system. Furthermore, a number of performance features, largely based on the PARADISE dialogue evaluation scheme (Walker et al., 1997), were automatically logged, derived, or manually annotated. In addition, we logged the cues automatically detected in each user utterance, as well as the initiative distribution for each turn and the dialogue acts selected to generate each system response. | Extends | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1476 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
SNoW (Carleson et al., 1999; Roth, 1998) is a multi-class classifier that is specifically tailored for learning in domains in which the potential number of information sources (features) taking part in decisions is very large, of which NLP is a principal example. The shallow parser used is the SNoW-based CSCL parser (Punyakanok and Roth, 2001; Munoz et al., 1999). The reported results for the full parse tree (on section 23) are recall/precision of 88.1/87.5 (Collins, 1997).
Citation Sentence:
SNoW ( Carleson et al. , 1999 ; Roth , 1998 ) is a multi-class classifier that is specifically tailored for learning in domains in which the potential number of information sources ( features ) taking part in decisions is very large , of which NLP is a principal example .
Context after the citation:
It works by learning a sparse network of linear functions over a pre-defined or incrementally learned feature space. Typically, SNoW is used as a classifier, and predicts using a winner-take-all mechanism over the activation value of the target classes. However, in addition to the prediction, it provides a reliable confidence level in the prediction, which enables its use in an inference algorithm that combines predictors to produce a coherent inference. Indeed, in CSCL (constraint satisfaction with classifiers), SNoW is used to learn several different classifiers - each detects the beginning or end of a phrase of some type (noun phrase, verb phrase, etc.). | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1477 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Hence, enumerating morphological variants in a semi-automatically generated lexicon, such as proposed for French (Zweigenbaum et al., 2001), turns out to be infeasible, at least for German and related languages. The problem one faces from an IR point of view is that besides fairly standardized nominal compounds, which already form a regular part of the sublanguage proper, a myriad of ad hoc compounds are formed on the fly which cannot be anticipated when formulating a retrieval query though they appear in relevant documents. This problem becomes even more pressing for technical sublanguages, such as medical German (e.g., "Blut druck mess gerät" translates to "device for measuring blood pressure").
Citation Sentence:
Hence , enumerating morphological variants in a semi-automatically generated lexicon , such as proposed for French ( Zweigenbaum et al. , 2001 ) , turns out to be infeasible , at least for German and related languages .
Context after the citation:
Furthermore, medical terminology is characterized by a typical mix of Latin and Greek roots with the corresponding host language (e.g., German), often referred to as neo-classical compounding (McCray et al., 1988). While this is simply irrelevant for general-purpose morphological analyzers, dealing with such phenomena is crucial for any attempt to cope adequately with medical free-texts in an IR setting (Wolff, 1984). We here propose an approach to document retrieval which is based on the idea of segmenting query and document terms into basic subword units. Hence, this approach combines procedures for deflection, dederivation and decomposition. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1478 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
It is analogous to the step in other translation model induction algorithms that sets all probabilities below a certain threshold to negligible values (Brown et al., 1990; Dagan et al., 1993; Chen, 1996). This step significantly reduces the computational burden of the algorithm. Discard all likelihood scores for word types deemed unlikely to be mutual translations, i.e. all L(u, v) < 1.
Citation Sentence:
It is analogous to the step in other translation model induction algorithms that sets all probabilities below a certain threshold to negligible values ( Brown et al. , 1990 ; Dagan et al. , 1993 ; Chen , 1996 ) .
Context after the citation:
To retain word type pairs that are at least twice as likely to be mutual translations than not, the threshold can be raised to 2. Conversely, the threshold can be lowered to buy more coverage at the cost of a larger model that will converge more slowly. 2. Sort all remaining likelihood estimates L(u, v) from highest to lowest. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1479 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
Note that although our feature set was drawn primarily from our prior uncertainty detection experiments (Forbes-Riley and Litman, 2011a; Drummond and Litman, 2011), we have also experimented with other features, including state-of-the-art acoustic-prosodic features used in the last Interspeech Challenges (Schuller et al., 2010; Schuller et al., 2009b) and made freely available in the openSMILE Toolkit (Florian et al., 2010). • Lexical and Dialogue Features dialogue name and turn number question name and question depth ITSPOKE-recognized lexical items in turn ITSPOKE-labeled turn (in)correctness incorrect runs • User Identifier Features: gender and pretest score deviation running totals and averages for all features
Citation Sentence:
Note that although our feature set was drawn primarily from our prior uncertainty detection experiments ( Forbes-Riley and Litman , 2011a ; Drummond and Litman , 2011 ) , we have also experimented with other features , including state-of-the-art acoustic-prosodic features used in the last Interspeech Challenges ( Schuller et al. , 2010 ; Schuller et al. , 2009b ) and made freely available in the openSMILE Toolkit ( Florian et al. , 2010 ) .
Context after the citation:
To date, however, these features have only decreased the cross-validation performance of our models.8 While some of our features are tutoring-specific, these have similar counterparts in other applications (i.e., answer (in)correctness corresponds to a more general notion of "response appropriateness" in other domains, while pretest score corresponds to the general notion of domain expertise). Moreover, all of our features are fully automatic and available in real-time, so that the model can be directly implemented and deployed. To that end, we now describe the results of our intrinsic and extrinsic evaluations of our DISE model, aimed at determining whether it is ready to be evaluated with real users. | Extends | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:148 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
They proved to be useful in a number of NLP applications such as natural language generation (Iordanskaja et al., 1991), multidocument summarization (McKeown et al., 2002), automatic evaluation of MT (Denkowski and Lavie, 2010), and TE (Dinu and Wang, 2009). Paraphrase tables (PPHT) contain pairs of corresponding phrases in the same language, possibly associated with probabilities. 1http://www.statmt.org/wmt10/
Citation Sentence:
They proved to be useful in a number of NLP applications such as natural language generation ( Iordanskaja et al. , 1991 ) , multidocument summarization ( McKeown et al. , 2002 ) , automatic evaluation of MT ( Denkowski and Lavie , 2010 ) , and TE ( Dinu and Wang , 2009 ) .
Context after the citation:
One of the proposed methods to extract paraphrases relies on a pivot-based approach using phrase alignments in a bilingual parallel corpus (Bannard and Callison-Burch, 2005). With this method, all the different phrases in one language that are aligned with the same phrase in the other language are extracted as paraphrases. After the extraction, pruning techniques (Snover et al., 2009) can be applied to increase the precision of the extracted paraphrases. In our work we used available2 paraphrase databases for English and Spanish which have been extracted using the method previously outlined. | Motivation | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1480 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
The results, which partly confirm those obtained on a smaller dataset in Paggio and Navarretta (2010), must be seen in light of the fact that our gesture annotation scheme comprises more fine-grained categories than most of the studies mentioned earlier for both head movements and face expressions. We find that prosodic features improve the classification of dialogue acts and that head gestures, where they occur, contribute to the semantic interpretation of feedback expressions. Our data are made up by a collection of eight video-recorded map-task dialogues in Danish, which were annotated with phonetic and prosodic information.
Citation Sentence:
The results , which partly confirm those obtained on a smaller dataset in Paggio and Navarretta ( 2010 ) , must be seen in light of the fact that our gesture annotation scheme comprises more fine-grained categories than most of the studies mentioned earlier for both head movements and face expressions .
Context after the citation:
The classification results improve, however, if similar categories such as head nods and jerks are collapsed into a more general category. In Section 2 we describe the multimodal Danish corpus. In Section 3, we describe how the prosody of feedback expressions is annotated, how their content is coded in terms of dialogue act, turn and agreement labels, and we provide inter-coder agreement measures. In Section 4 we account for the annotation of head gestures, including inter- | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1481 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
We employ the idea of ultraconservative update (Crammer and Singer, 2003; Crammer et al., 2006) to propose two incremental methods for local training in Algorithm 2 as follows. Unlike the global method which performs tuning on the whole development set Dev + Di as in Algorithm 1, Wi can be incrementally learned by optimizing on Di based on Wb. Our goal is to find an optimal weight, denoted by Wi, which is a local weight and used for decoding the sentence ti.
Citation Sentence:
We employ the idea of ultraconservative update ( Crammer and Singer , 2003 ; Crammer et al. , 2006 ) to propose two incremental methods for local training in Algorithm 2 as follows .
Context after the citation:
Ultraconservative update is an efficient way to consider the trade-off between the progress made on development set Dev and the progress made on Di. It desires that the optimal weight Wi is not only close to the baseline weight Wb, but also achieves the low loss over the retrieved examples Di. The idea of ultraconservative update can be formalized as follows: where d(W, Wb) is a distance metric over a pair of weights W and Wb. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1482 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
It has also been shown to be useful in joint inference of text with visual attributes obtained using visual classifiers (Silberer et al., 2013). Previously LDA has been successfully used to infer unsupervised joint topic distributions over words and feature norms together (Andrews et al., 2009; Silberer and Lapata, 2012). Our experiments are based on the multimodal extension of Latent Dirichlet Allocation developed by Andrews et al. (2009).
Citation Sentence:
It has also been shown to be useful in joint inference of text with visual attributes obtained using visual classifiers ( Silberer et al. , 2013 ) .
Context after the citation:
These multimodal LDA models (hereafter, mLDA) have been shown to be qualitatively sensible and highly predictive of several psycholinguistic tasks (Andrews et al., 2009). However, prior work using mLDA is limited to two modalities at a time. In this section, we describe bimodal mLDA and define two methods for extending it to three or more modalities. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1483 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
Gurevych (2006) reported a correlation of r=.69. Finkelstein et al. (2002) did not report inter-subject correlation for their larger dataset. Resnik (1995) reported a correlation of r=.9026.10 The results are not directly comparable, because he only used noun-noun pairs, words instead of concepts, a much smaller dataset, and measured semantic similarity instead of semantic relatedness.
Citation Sentence:
Gurevych ( 2006 ) reported a correlation of r = .69 .
Context after the citation:
Test subjects were trained students of computational linguistics, and word pairs were selected analytically. Evaluating the influence of using concept pairs instead of word pairs is complicated because word level judgments are not directly available. Therefore, we computed a lower and an upper bound for correlation coefficients. For the lower bound, we always selected the concept pair with highest standard deviation from each set of corresponding concept pairs. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1484 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
A more subtle example is weighted FSAs that approximate PCFGs (Nederhof, 2000; Mohri and Nederhof, 2001), or to extend the idea, weighted FSTs that approximate joint or conditional synchronous PCFGs built for translation. Arbitrary weights such as 2.7 may be assigned to arcs or sprinkled through a regexp (to be compiled into ε:ε/2.7 arcs). P(v, z) def= Σw,x,y P(v|w)P(w, x)P(y|x)P(z|y), implemented by composing 4 machines.6,7 There are also procedures for defining weighted FSTs that are not probabilistic (Berstel and Reutenauer, 1988).
Citation Sentence:
A more subtle example is weighted FSAs that approximate PCFGs ( Nederhof , 2000 ; Mohri and Nederhof , 2001 ) , or to extend the idea , weighted FSTs that approximate joint or conditional synchronous PCFGs built for translation .
Context after the citation:
These are parameterized by the PCFG's parameters, but add or remove strings of the PCFG to leave an improper probability distribution. Fortunately for those techniques, an FST with positive arc weights can be normalized to make it jointly or conditionally probabilistic: • An easy approach is to normalize the options at each state to make the FST Markovian. Unfortunately, the result may differ for equivalent FSTs that express the same weighted relation. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1485 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
As has been previously observed and exploited in the NLP literature (Pang and Lee, 2004; Agarwal and Bhattacharyya, 2005; Barzilay and Lapata, 2005), the above optimization function, unlike many others that have been proposed for graph or set partitioning, can be solved exactly in an provably efficient manner via methods for finding minimum cuts in graphs. A minimum-cost assignment thus represents an optimum way to classify the speech segments so that each one tends not to be put into the class that the individual-document classifier disprefers, but at the same time, highly associated speech segments tend not to be put in different classes. where c(s) is the âoppositeâ class from c(s).
Citation Sentence:
As has been previously observed and exploited in the NLP literature ( Pang and Lee , 2004 ; Agarwal and Bhattacharyya , 2005 ; Barzilay and Lapata , 2005 ) , the above optimization function , unlike many others that have been proposed for graph or set partitioning , can be solved exactly in an provably efficient manner via methods for finding minimum cuts in graphs .
Context after the citation:
In our view, the contribution of our work is the examination of new types of relationships, not the method by which such relationships are incorporated into the classification decision. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1486 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
WIT features an incremental understanding method (Nakano et al., 1999b) that makes it possible to build a robust and real-time system. for building spoken dialogue systems that integrate speech recognition, language understanding and generation, and speech output. This paper presents WIT', which is a toolkit 'WIT is an acronym of Workable spoken dialogue Inter-
Citation Sentence:
WIT features an incremental understanding method ( Nakano et al. , 1999b ) that makes it possible to build a robust and real-time system .
Context after the citation:
In addition, WIT compiles domain-dependent system specifications into internal knowledge sources so that building systems is easier. Although WIT requires more domain-dependent specifications than finite-state-model-based toolkits, WIT-based systems are capable of taking full advantage of language processing technology. WIT has been implemented and used to build several spoken dialogue systems. In what follows, we overview WIT, explain its architecture, domain-dependent system specifications, and implementation, and then discuss its advantages and problems. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1487 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
This choice is inspired by recent work on learning syntactic categories (Yatbaz et al., 2012), which successfully utilized such language models to represent word window contexts of target words. To capture syntagmatic patterns, we choose in this work standard n-gram language models as the basis for a concrete model implementing our scheme. This provides grounds to expect that such model has the potential to excel for verbs.
Citation Sentence:
This choice is inspired by recent work on learning syntactic categories ( Yatbaz et al. , 2012 ) , which successfully utilized such language models to represent word window contexts of target words .
Context after the citation:
However, we note that other richer types of language models, such as class-based (Brown et al., 1992) or hybrid (Tan et al., 2012), can be seamlessly integrated into our scheme. Our evaluations suggest that our model is indeed particularly advantageous for measuring semantic similarity for verbs, while maintaining comparable or better performance with respect to competitive baselines for nouns. | Motivation | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1488 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
As for work on Arabic (MSA), results have been reported on the PATB (Kulick, Gabbard, and Marcus 2006; Diab 2007; Green and Manning 2010), the Prague Dependency Treebank (PADT) (Buchholz and Marsi 2006; Nivre 2008) and the CATiB (Habash and Roth 2009). Looking at Hebrew, a Semitic language related to Arabic, Tsarfaty and Sima'an (2007) report that extending POS and phrase structure tags with definiteness information helps unlexicalized PCFG parsing. We also find that the number feature helps for Arabic.
Citation Sentence:
As for work on Arabic ( MSA ) , results have been reported on the PATB ( Kulick , Gabbard , and Marcus 2006 ; Diab 2007 ; Green and Manning 2010 ) , the Prague Dependency Treebank ( PADT ) ( Buchholz and Marsi 2006 ; Nivre 2008 ) and the CATiB ( Habash and Roth 2009 ) .
Context after the citation:
Recently, Green and Manning (2010) analyzed the PATB for annotation consistency, and introduced an enhanced split-state constituency grammar, including labels for short idafa constructions and verbal or equational clauses. Nivre (2008) reports experiments on Arabic parsing using his MaltParser (Nivre et al. 2007), trained on the PADT. His results are not directly comparable to ours because of the different treebank representations, even though all the experiments reported here were performed using the MaltParser. Our results agree with previous work on Arabic and Hebrew in that marking the definite article is helpful for parsing. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1489 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
For this mention-pair coreference model Ï(u,v), we use the same set of features used in Bengtson and Roth (2008). Thus, we can construct a forest and the mentions in the same connected component (i.e., in the same tree) are co-referred. Here, yu,v = 1 iff mentions u,v are directly linked.
Citation Sentence:
For this mention-pair coreference model Ï ( u , v ) , we use the same set of features used in Bengtson and Roth ( 2008 ) .
Context after the citation:
fu,vyu,v, | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:149 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Other studies which view IR as a query generation process include Maron and Kuhns, 1960; Hiemstra and Kraaij, 1999; Ponte and Croft, 1998; Miller et al, 1999.
Citation Sentence:
Other studies which view IR as a query generation process include Maron and Kuhns , 1960 ; Hiemstra and Kraaij , 1999 ; Ponte and Croft , 1998 ; Miller et al , 1999 .
Context after the citation:
Our work has focused on cross-lingual retrieval. Many approaches to cross-lingual IR have been published. One common approach is using Machine Translation (MT) to translate the queries to the language of the documents or translate documents to the language of the queries (Gey et al, 1999; Oard, 1998). For most languages, there are no MT systems at all. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1490 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
In Section 5, we discuss the difficulties associated with such user studies, and describe a human-based evaluation we conducted for a small subset of the responses generated by our system (Marom and Zukerman 2007b). Quality is a subjective measure, which is best judged by the users of the system (i.e., the help-desk customers or operators). Our evaluation is performed by measuring the quality of the generated responses.
Citation Sentence:
In Section 5 , we discuss the difficulties associated with such user studies , and describe a human-based evaluation we conducted for a small subset of the responses generated by our system ( Marom and Zukerman 2007b ) .
Context after the citation:
However, our more comprehensive evaluation is an automatic one that treats the responses generated by the help-desk operators as model responses, and performs text-based comparisons between the model responses and the automatically generated ones. We employ 10-fold cross-validation, where we split each data set in the corpus into 10 test sets, each comprising 10% of the e-mail dialogues; the remaining 90% of the dialogues constitute the training set. For each of the cross-validation folds, the responses generated for the requests in the test split are compared against the actual responses generated by help-desk operators for these requests. Although this method of assessment is less informative than human-based evaluations, it enables us to evaluate the performance of our system with substantial amounts of data, and produce representative results for a large corpus such as ours. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1491 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Hook (1981) considers the second verb V2 as an aspectual complex comparable to the auxiliaries. A plethora of works has been done to provide linguistic explanations on the formation of such words, yet none so far has led to any consensus. Compound verbs are a special phenomenon that is abundantly found in Indo-European languages like Indian languages.
Citation Sentence:
Hook ( 1981 ) considers the second verb V2 as an aspectual complex comparable to the auxiliaries .
Context after the citation:
Butt (1993) argues CV formations in Hindi and Urdu are either morphological or syntactical and their formation take place at the argument structure. Bashir (1993) tried to construct a semantic analysis based on “prepared” and “unprepared mind”. Similar findings have been proposed by Pandharipande (1993) that points out V1 and V2 are paired on the basis of their semantic compatibility, which is subject to syntactic constraints. Paul (2004) tried to represent Bangla CVs in terms of HPSG formalism.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1492 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
2We could just as easily use other symmetric "association" measures, such as φ² (Gale & Church, 1991) or the Dice coefficient (Smadja, 1992). However, n(u) = Σ_v n(u,v), which is not the same as the frequency of u, because each token of u can co-occur with several different v's. If uk and vk are indeed mutual translations, then their tendency to 1The co-occurrence frequency of a word type pair is simply the number of times the pair co-occurs in the corpus.
Citation Sentence:
2We could just as easily use other symmetric `` association '' measures , such as φ2 ( Gale & Church , 1991 ) or the Dice coefficient ( Smadja , 1992 ) .
Context after the citation:
co-occur is called a direct association. Now, suppose that uk and uk+1 often co-occur within their language. Then vk and uk+1 will also co-occur more often than expected by chance. The arrow connecting vk and uk+1 in Figure 1 represents an indirect association, since the association between vk and uk+1 arises only by virtue of the association between each of them and uk.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1493 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
In this situation, Brown et al. (1993b, 293) recommend "evaluating the expectations using only a single, probable alignment." This is why we must make do with approximations to the EM algorithm. Barring such a decomposition method, the MLE approach is infeasible.
Citation Sentence:
In this situation , Brown et al. ( 1993b , 293 ) recommend `` evaluating the expectations using only a single , probable alignment . ''
Context after the citation:
The single most probable assignment Amax is the maximum a posteriori (MAP) assignment: If we represent the bitext as a bipartite graph and weight the edges by log trans(u, v), then the right-hand side of Equation 26 is an instance of the weighted maximum matching problem and Amax is its solution. For a bipartite graph G = (V1 ∪ V2, E), with v = |V1 ∪ V2| and e = |E|, the lowest currently known upper bound on the computational complexity of this problem is O(ve + v² log v) (Ahuja, Magnati, and Orlin 1993, 500). Although this upper bound is polynomial, it is still too expensive for typical bitexts.10 Subsection 5.1.2 describes a greedy approximation to the MAP approximation.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1494 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
The use of the web as a corpus for teaching and research on language has been proposed a number of times (Kilgarriff, 2001; Robb, 2003; Rundell, 2000; Fletcher, 2001, 2004b) and received a special issue of the journal Computational Linguistics (Kilgarriff and Grefenstette, 2003). This corpus annotation bottleneck becomes even more problematic for voluminous data sets drawn from the web. Larger systems to support multiple document tagging processes would require resources that cannot be realistically provided by existing single-server systems.
Citation Sentence:
The use of the web as a corpus for teaching and research on language has been proposed a number of times ( Kilgarriff , 2001 ; Robb , 2003 ; Rundell , 2000 ; Fletcher , 2001 , 2004b ) and received a special issue of the journal Computational Linguistics ( Kilgarriff and Grefenstette , 2003 ) .
Context after the citation:
Studies have used several different methods to mine web data. Turney (2001) extracts word co-occurrence probabilities from unlabelled text collected from a web crawler. Baroni and Bernardini (2004) built a corpus by iteratively searching Google for a small set of seed terms. Prototypes of Internet search engines for linguists, corpus linguists and lexicographers have been proposed: WebCorp (Kehoe and Renouf, 2002), KWiCFinder (Fletcher, 2004a) and the Linguist's Search Engine (Kilgarriff, 2003; Resnik and Elkiss, 2003).
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1495 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
⢠Learnability (Zernik and Dyer 1987) ⢠Text generation (Hovy 1988; Milosavljevic, Tulloch, and Dale 1996) ⢠Speech generation (Rayner and Carter 1997) ⢠Localization (Sch¨aler 1996) More specifically, the notion of the phrasal lexicon (used first by Becker 1975) has been used successfully in a number of areas: Accordingly, they generate lexical correspondences by means of co-occurrence measures and string similarity metrics.
Citation Sentence:
⢠Learnability ( Zernik and Dyer 1987 ) ⢠Text generation ( Hovy 1988 ; Milosavljevic , Tulloch , and Dale 1996 ) ⢠Speech generation ( Rayner and Carter 1997 ) ⢠Localization ( Sch ¨ aler 1996 )
Context after the citation:
More recently, Simard and Langlais (2001) have proposed the exploitation of TMs at a subsentential level, while Carl, Way, and Schäler (2002) and Schäler, Way, and Carl (2003, pages 108–109) describe how phrasal lexicons might come to occupy a central place in a future hybrid integrated translation environment. This, they suggest, may result in a paradigm shift from TM to EBMT via the phrasal lexicon: Translators are on the whole wary of MT technology, but once subsentential alignment is enabled, translators will become aware of the benefits to be gained from (source, target) phrasal segments, and from there they suggest that “it is a reasonably short step to enabling an automated solution via the recombination element of EBMT systems such as those described in [Carl and Way 2003].” In this section, we describe how the memory of our EBMT system is seeded with a set of translations obtained from Web-based MT systems. From this initial resource, we subsequently derive a number of different databases that together allow many new input sentences to be translated that it would not be possible to translate in other systems.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1496 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
Following our previous work on stance classification (Hasan and Ng, 2013c), we employ three types of features computed based on the frame-semantic parse of each sentence in a post obtained from SEMAFOR (Das et al., 2010). While dependency-based features capture the syntactic dependencies, frame-semantic features encode the semantic representation of the concepts in a sentence. Frame-semantic features.
Citation Sentence:
Following our previous work on stance classification ( Hasan and Ng , 2013c ) , we employ three types of features computed based on the frame-semantic parse of each sentence in a post obtained from SEMAFOR ( Das et al. , 2010 ) .
Context after the citation:
Frame-word interaction features encode whether two words appear in different elements of the same frame. Hence, each frame-word interaction feature consists of (1) the name of the frame f from which it is created, and (2) an unordered word pair in which the words are taken from two frame elements of f. A frame-pair feature is represented as a word pair corresponding to the names of two frames and encodes whether the target word of the first frame appears within an element of the second frame. Finally, frame ngram features are a variant of word n-grams. | Extends | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1497 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
However, the literature on Linguistic Steganography, in which linguistic properties of a text are modified to hide information, is small compared with other media (Bergmair, 2007). Given the ubiquitous nature of natural languages and electronic text, text is an obvious medium to consider. Since the difference between 11111111 and 11111110 in the value for red/green/blue intensity is likely to be undetectable by the human eye, the LSB can be used to hide information other than colour, without being perceptible by a human observer.1 A key question for any steganography system is the choice of cover medium.
Citation Sentence:
However , the literature on Linguistic Steganography , in which linguistic properties of a text are modified to hide information , is small compared with other media ( Bergmair , 2007 ) .
Context after the citation:
The likely reason is that it is easier to make changes to images and other nonlinguistic media which are undetectable by an observer. Language has the property that even small local changes to a text, e.g. replacing a word by a word with similar meaning, may result in text which is anomalous at the document level, or anomalous with respect to the state of the world. Hence finding linguistic transformations which can be applied reliably and often is a challenging problem for Linguistic Steganography. In this paper we focus on steganography rather than watermarking, since we are interested in the requirement that any changes to a text be imperceptible to an observer. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1498 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
This has been reported for other languages, too, dependent on the generality of the chosen approach (Jäppinen and Niemistö, 1988; Choueka, 1990; Popovic and Willett, 1992; Ekmekçioglu et al., 1995; Hedlund et al., 2001; Pirkola, 2001). mers (Lovins, 1968; Porter, 1980) demonstrably improve retrieval performance. For English, known for its limited number of inflection patterns, lexicon-free general-purpose stem1â â denotes the string concatenation operator.
Citation Sentence:
This has been reported for other languages , too , dependent on the generality of the chosen approach ( Jäppinen and Niemistö , 1988 ; Choueka , 1990 ; Popovic and Willett , 1992 ; Ekmekçioglu et al. , 1995 ; Hedlund et al. , 2001 ; Pirkola , 2001 ) .
Context after the citation:
When it comes to a broader scope of morphological analysis, including derivation and composition, even for the English language only restricted, domain-specific algorithms exist. This is particularly true for the medical domain. From an IR view, a lot of specialized research has already been carried out for medical applications, with emphasis on the lexico-semantic aspects of dederivation and decomposition (Pacak et al., 1980; Norton and Pacak, 1983; Wolff, 1984; Wingert, 1985; Dujols et al., 1991; Baud et al., 1998). While one may argue that single-word compounds are quite rare in English (which is not the case in the medical domain either), this is certainly not true for German and other basically agglutinative languages known for excessive single-word nominal compounding. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1499 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
each relevant document is retrieved (Baeza-Yates and Ribeiro-Neto 1999). • Mean Average Precision (MAP) is the average of precision values after • Precision at ten retrieved documents (P10) measures the fraction of relevant documents in the top ten results.
Citation Sentence:
each relevant document is retrieved ( Baeza-Yates and Ribeiro-Neto 1999 ) .
Context after the citation:
It is the most widely accepted single-value metric in information retrieval, and is seen to balance the need for both precision and recall. • Mean Reciprocal Rank (MRR) is a measure of how far down a hit list the user must browse before encountering the first relevant result. The score is equal to the reciprocal of the rank, that is, a relevant document at rank 1 gets a score of 1, 1/2 at rank 2, 1/3 at rank 3, and so on. Note that this measure only captures the appearance of the first relevant document.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:15 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
conclusion
Context before the citation:
Compared to the reranking technique in Collins (2000), who obtained an LP of 89.9% and an LR of 89.6%, our results show a 9% relative error rate reduction. This is roughly an 11% relative reduction in error rate over Charniak (2000) and Bod's PCFG-reduction reported in Table 1. The highest accuracy is obtained by SL-DOP at 12 ≤ n ≤ 14: an LP of 90.8% and an LR of 90.7%.
Citation Sentence:
Compared to the reranking technique in Collins ( 2000 ) , who obtained an LP of 89.9 % and an LR of 89.6 % , our results show a 9 % relative error rate reduction .
Context after the citation:
While SL-DOP and LS-DOP have been compared before in Bod (2002), especially in the context of musical parsing, this paper presents the The DOP approach is based on two distinctive features: (1) the use of corpus fragments rather than grammar rules, and (2) the use of arbitrarily large fragments rather than restricted ones. While the first feature has been generally adopted in statistical NLP, the second feature has for a long time been a serious bottleneck, as it results in exponential processing time when the most probable parse tree is computed. This paper showed that a PCFG-reduction of DOP in combination with a new notion of the best parse tree results in fast processing times and very competitive accuracy on the Wall Street Journal treebank. This paper also re-affirmed that the coarsegrained approach of using all subtrees from a treebank outperforms the fine-grained approach of specifically modeling lexical-syntactic depen dencies (as e.g. in Collins 1999 and Charniak 2000). | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:150 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
We offer a theorem that highlights the broad applicability of these modeling techniques.4 If f(input, output) is a weighted regular relation, then the following statements are equivalent: (1) f is a joint probabilistic relation; (2) f can be computed by a Markovian FST that halts with probability 1; (3) f can be expressed as a probabilistic regexp, i.e., a regexp built up from atomic expressions a : b (for a ∈ Σ ∪ {ε}, b ∈ Δ ∪ {ε}) using concatenation, probabilistic union +p, and probabilistic closure *p. For defining conditional relations, a good regexp language is unknown to us, but they can be defined in several other ways: (1) via FSTs as in Fig. 1c, (2) by compilation of weighted rewrite rules (Mohri and Sproat, 1996), (3) by compilation of decision trees (Sproat and Riley, 1996), (4) as a relation that performs contextual left-to-right replacement of input substrings by a smaller conditional relation (Gerdemann and van Noord, 1999),5 (5) by conditionalization of a joint relation as discussed below. Each of these probabilities in turn affects multiple arcs in the composed FST of Fig. 1a. These 4 parameters have global effects on Fig. 1a, thanks to complex parameter tying: the arcs b:p and b:q in Fig. 1b get respective probabilities (1 − λ)µν and (1 − µ)ν, which covary with ν and vary oppositely with µ.
Citation Sentence:
We offer a theorem that highlights the broad applicability of these modeling techniques .4 If f ( input , output ) is a weighted regular relation , then the following statements are equivalent : ( 1 ) f is a joint probabilistic relation ; ( 2 ) f can be computed by a Markovian FST that halts with probability 1 ; ( 3 ) f can be expressed as a probabilistic regexp , i.e. , a regexp built up from atomic expressions a : b ( for a ∈ Σ ∪ -LCB- ε -RCB- , b ∈ Δ ∪ -LCB- ε -RCB- ) using concatenation , probabilistic union + p , and probabilistic closure * p. For defining conditional relations , a good regexp language is unknown to us , but they can be defined in several other ways : ( 1 ) via FSTs as in Fig. 1c , ( 2 ) by compilation of weighted rewrite rules ( Mohri and Sproat , 1996 ) , ( 3 ) by compilation of decision trees ( Sproat and Riley , 1996 ) , ( 4 ) as a relation that performs contextual left-to-right replacement of input substrings by a smaller conditional relation ( Gerdemann and van Noord , 1999 ) ,5 ( 5 ) by conditionalization of a joint relation as discussed below .
Context after the citation:
A central technique is to define a joint relation as a noisy-channel model, by composing a joint relation with a cascade of one or more conditional relations as in Fig. 1 (Pereira and Riley, 1997; Knight and Graehl, 1998). The general form is illustrated by 3Conceptually, the parameters represent the probabilities of reading another a (λ); reading another b (ν); transducing b to p rather than q (µ); starting to transduce p to a rather than x (ρ). 4To prove (1)⇒(3), express f as an FST and apply the well-known Kleene-Schützenberger construction (Berstel and Reutenauer, 1988), taking care to write each regexp in the construction as a constant times a probabilistic regexp. A full proof is straightforward, as are proofs of (3)⇒(2), (2)⇒(1).
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1500 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
We use the Clark and Curran (2007) CCG parser to analyse the sentence before and after paraphrasing. In order to improve the grammaticality checking, we use a parser as an addition to the basic Google ngram method.
Citation Sentence:
We use the Clark and Curran ( 2007 ) CCG parser to analyse the sentence before and after paraphrasing .
Context after the citation:
Combinatory Categorial Grammar (CCG) is a lexicalised grammar formalism, in which CCG lexical categories – typically expressing subcategorisation information – are assigned to each word in a sentence. The grammatical check works by checking if the words in the sentence outside of the phrase and paraphrase receive the same lexical categories before and after paraphrasing. If there is any change in lexical category assignment to these words then the paraphrase is judged ungrammatical. Hence the grammar check is at the word, rather than derivation, level; however, CCG lexical categories contain a large amount of syntactic information which this method is able to exploit.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1501 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
For example, consider a relational description (cf. Dale and Haddock 1991) involving a gradable adjective, as in the dog in the large shed. Some generalizations of our method are fairly straightforward. Extensions of the Approach 9.1 Relational Descriptions
Citation Sentence:
For example , consider a relational description ( cf. Dale and Haddock 1991 ) involving a gradable adjective , as in the dog in the large shed .
Context after the citation:
CD for this type of descriptions along the lines of Section 4 is not difficult once relational descriptions are integrated with a standard GRE algorithm (Krahmer and Theune 2002, Section 8.6.2): Suppose an initial description is generated describing the set of all those dogs that are in sheds over a given size (say, size 5); if this description happens to distinguish an individual dog then this legitimizes the use of the noun phrase the dog in the large shed. Note that this is felicitous even if the shed is not the largest one in the domain, as is true for d2 in the following situation (contains-a=b means that a is contained by b): In other words, the dog in the large shed denotes “the dog such that there is no other shed that is equally large or larger and that contains a dog”. Note that it would be odd, in the above-sketched situation, to say the dog in the largest shed.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1502 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Previous versions of our work, as described in Bachenko et al. (1986) also assume that phrasing is dependent on predicate-argument structure. Selkirk (1984) and Nespor and Vogel (1986) take a similar approach, but within a different theoretical framework. Crystal (1969) claims that prosodic phrase boundaries will co-occur with grammatical functions such as subject, predicate, modifier, and adjunct.
Citation Sentence:
Previous versions of our work , as described in Bachenko et al. ( 1986 ) also assume that phrasing is dependent on predicate-argument structure .
Context after the citation:
The problem here is that the phrasing in observed data often ignores the argument status of constituents. In 17a–f, for example, the phrasing makes no distinction between arguments and adjuncts. All of the sentences have the same X(VY) pattern even though Y is a complement in the first case (the first serious attempt) and an adjunct in the others. (The complement in 17a and the adjuncts in 17b–f are italicized.)
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1503 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
like information extraction (Yates and Etzioni, 2009) and textual entailment (Berant et al., 2010). Furthermore, if we use a labeled-ALS as the metric for augmented-loss training, we also see a considerable increase in LAS. When training with ALS (labeled and unlabeled), we see an improvement in UAS, LAS, and ALS.
Citation Sentence:
like information extraction ( Yates and Etzioni , 2009 ) and textual entailment ( Berant et al. , 2010 ) .
Context after the citation:
In Table 3 we show results for parsing with the ALS augmented-loss objective. For each parser, we consider two different ALS objective functions; one based on unlabeled-ALS and the other on labeledALS. The arc-length score penalizes incorrect longdistance dependencies more than local dependencies; long-distance dependencies are often more destructive in preserving sentence meaning and can be more difficult to predict correctly due to the larger context on which they depend. Combining this with the standard attachment scores biases training to focus on the difficult head dependencies. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1504 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The inclusion of the coreference task in the Sixth and Seventh Message Understanding Conferences (MUC-6 and MUC-7) gave a considerable impetus to the development of coreference resolution algorithms and systems, such as those described in Baldwin et al. (1995), Gaizauskas and Humphreys (1996), and Kameyama (1997). While the shift toward knowledge-poor strategies and the use of corpora represented the main trends of anaphora resolution in the 1990s, there are other significant highlights in recent anaphora resolution research. From simple co-occurrence rules (Dagan and Itai 1990) through training decision trees to identify anaphor-antecedent pairs (Aone and Bennett 1995) to genetic algorithms to optimize the resolution factors (Orasan, Evans, and Mitkov 2000), the successful performance of more and more modern approaches was made possible by the availability of suitable corpora.
Citation Sentence:
The inclusion of the coreference task in the Sixth and Seventh Message Understanding Conferences ( MUC-6 and MUC-7 ) gave a considerable impetus to the development of coreference resolution algorithms and systems , such as those described in Baldwin et al. ( 1995 ) , Gaizauskas and Humphreys ( 1996 ) , and Kameyama ( 1997 ) .
Context after the citation:
The last decade of the 20th century saw a number of anaphora resolution projects for languages other than English such as French, German, Japanese, Spanish, Portuguese, and Turkish. Against the background of a growing interest in multilingual NLP, multilingual anaphora/coreference resolution has gained considerable momentum in recent years (Aone and McKee 1993; Azzam, Humphreys, and Gaizauskas 1998; Harabagiu and Maiorano 2000; Mitkov and Barbu 2000; Mitkov 1999; Mitkov and Stys 1997; Mitkov, Belguith, and Stys 1998). Other milestones of recent research include the deployment of probabilistic and machine learning techniques (Aone and Bennett 1995; Kehler 1997; Ge, Hale, and Charniak 1998; Cardie and Wagstaff 1999); the continuing interest in centering, used either in original or in revised form (Abracos and Lopes 1994; Strube and Hahn 1996; Hahn and Strube 1997; Tetreault 1999); and proposals related to the evaluation methodology in anaphora resolution (Mitkov 1998a, 2001b). For a more detailed survey of the state of the art in anaphora resolution, see Mitkov (forthcoming).
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1505 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Lee et al. (2012) model entity coreference and event coreference jointly; Durrett and Klein (2014) consider joint coreference and entity-linking. Several recent works suggest studying coreference jointly with other tasks. Our joint framework provides similar insights, where the added mention decision variable partly reflects if the mention is singleton or not.
Citation Sentence:
Lee et al. ( 2012 ) model entity coreference and event coreference jointly ; Durrett and Klein ( 2014 ) consider joint coreference and entity-linking .
Context after the citation:
The work closest to ours is that of Lassalle and Denis (2015), which studies a joint anaphoricity detection and coreference resolution framework. While their inference objective is similar, their work assumes gold mentions are given and thus their modeling is very different. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1506 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
However, learning-based resolvers have not been able to benefit from having an SC agreement feature, presumably because the method used to compute the SC of an NP is too simplistic: while the SC of a proper name is computed fairly accurately using a named entity (NE) recognizer, many resolvers simply assign to a common noun the first (i.e., most frequent) WordNet sense as its SC (e.g., Soon et al. (2001), Markert and Nissim (2005)). Another type of semantic knowledge that has been employed by coreference resolvers is the semantic class (SC) of an NP, which can be used to disallow coreference between semantically incompatible NPs. As a result, researchers have re-adopted the once-popular knowledge-rich approach, investigating a variety of semantic knowledge sources for common noun resolution, such as the semantic relations between two NPs (e.g., Ji et al. (2005)), their semantic similarity as computed using WordNet (e.g., Poesio et al. (2004)) or Wikipedia (Ponzetto and Strube, 2006), and the contextual role played by an NP (see Bean and Riloff (2004)).
Citation Sentence:
However , learning-based resolvers have not been able to benefit from having an SC agreement feature , presumably because the method used to compute the SC of an NP is too simplistic : while the SC of a proper name is computed fairly accurately using a named entity ( NE ) recognizer , many resolvers simply assign to a common noun the first ( i.e. , most frequent ) WordNet sense as its SC ( e.g. , Soon et al. ( 2001 ) , Markert and Nissim ( 2005 ) ) .
Context after the citation:
It is not easy to measure the accuracy of this heuristic, but the fact that the SC agreement feature is not used by Soon et al.'s decision tree coreference classifier seems to suggest that the SC values of the NPs are not computed accurately by this first-sense heuristic. Motivated in part by this observation, we examine whether automatically induced semantic class knowledge can improve the performance of a learning-based coreference resolver, reporting evaluation results on the commonly-used ACE coreference corpus. Our investigation proceeds as follows.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1507 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
This includes work on generalized expectation (Mann and McCallum, 2010), posterior regularization (Ganchev et al., 2010) and constraint driven learning (Chang et al., 2007; Chang et al., 2010). There have been a number of efforts to exploit weak or external signals of quality to train better prediction models. We call our algorithm augmented-loss training as it optimizes multiple losses to augment the traditional supervised parser loss.
Citation Sentence:
This includes work on generalized expectation ( Mann and McCallum , 2010 ) , posterior regularization ( Ganchev et al. , 2010 ) and constraint driven learning ( Chang et al. , 2007 ; Chang et al. , 2010 ) .
Context after the citation:
The work of Chang et al. (2007) on constraint driven learning is perhaps the closest to our framework and we draw connections to it in Section 5. In these studies the typical goal is to use the weak signal to improve the structured prediction models on the intrinsic evaluation metrics. For our setting this would mean using weak application specific signals to improve dependency parsing. Though we explore such ideas in our experiments, in particular for semi-supervised domain adaptation, we are primarily interested in the case where the weak signal is precisely what we wish to optimize, but also desire the benefit from using both data with annotated parse structures and data specific to the task at hand to guide parser training. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1508 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The final machine is a trigram language model, specifically a Kneser-Ney (Chen and Goodman, 1998) based backoff language model. The second machine is a dictionary that accepts characters and produces identifiers corresponding to dictionary entries. The first machine, illustrated in Figure 1 encodes the prefix and suffix expansion rules, producing a lattice of possible segmentations.
Citation Sentence:
The final machine is a trigram language model , specifically a Kneser-Ney ( Chen and Goodman , 1998 ) based backoff language model .
Context after the citation:
Differing from (Lee et al., 2003), we have also introduced an explicit model for unknown words (1As an example, we do not chain mentions with different gender, number, etc.) based upon a character unigram model, although this model is dominated by an empirically chosen unknown word penalty. Using 0.5M words from the combined Arabic Treebanks 1 v2, 2 v2 and 3 v1, the dictionary based segmenter achieves an exact word match 97.8% correct segmentation.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1509 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The paper compares and contrasts the training time needed and performance achieved by our modified learner with two other systems: a standard transformation-based learner, and the ICA system (Hepple, 2000). In this paper, we present a novel and realistic method for speeding up the training time of a transformation-based learner without sacrificing performance. However, it does have a serious drawback: the training time is often intolerably long, especially on the large corpora which are often used in NLP.
Citation Sentence:
The paper compares and contrasts the training time needed and performance achieved by our modified learner with two other systems : a standard transformation-based learner , and the ICA system ( Hepple , 2000 ) .
Context after the citation:
The results of these experiments show that our system is able to achieve a significant improvement in training time while still achieving the same performance as a standard transformation-based learner. This is a valuable contribution to systems and algorithms which utilize transformation-based learning at any part of the execution. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:151 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
Hovy has described another text planner that builds similar plans (Hovy 1988b). In IGEN, the plans can involve any goals or actions that could be achieved via communication. Their planner uses plan structures similar to IGEN's, except that the plan operators they use are generally instantiations of rhetorical relations drawn from Rhetorical Structure Theory (Mann and Thompson 1987).
Citation Sentence:
Hovy has described another text planner that builds similar plans ( Hovy 1988b ) .
Context after the citation:
This system, however, starts with a list of information to be expressed and merely arranges it into a coherent pattern; it is thus not a planner in the sense used here (as Hovy makes clear). 10 Since text planning was not the primary focus of this work, IGEN is designed to simply assume that any false preconditions are unattainable. IGEN's planner divides the requirements of a plan into two parts: the preconditions, which are not planned for, and those in the plan body, which are. This has no | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1510 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
In particular, since we treat each individual speech within a debate as a single "document", we are considering a version of document-level sentiment-polarity classification, namely, automatically distinguishing between positive and negative documents (Das and Chen, 2001; Pang et al., 2002; Turney, 2002; Dave et al., 2003). Task properties Determining whether or not a speaker supports a proposal falls within the realm of sentiment analysis, an extremely active research area devoted to the computational treatment of subjective or opinion-oriented language (early work includes Wiebe and Rapaport (1988), Hearst (1992), Sack (1994), and Wiebe (1994); see Esuli (2006) for an active bibliography). Note that from an experimental point of view, this is a very convenient problem to work with because we can automatically determine ground truth (and thus avoid the need for manual annotation) simply by consulting publicly available voting records.
Citation Sentence:
In particular , since we treat each individual speech within a debate as a single `` document '' , we are considering a version of document-level sentiment-polarity classification , namely , automatically distinguishing between positive and negative documents ( Das and Chen , 2001 ; Pang et al. , 2002 ; Turney , 2002 ; Dave et al. , 2003 ) .
Context after the citation:
Most sentiment-polarity classifiers proposed in the recent literature categorize each document independently. A few others incorporate various measures of inter-document similarity between the texts to be labeled (Agarwal and Bhattacharyya, 2005; Pang and Lee, 2005; Goldberg and Zhu, 2006). Many interesting opinion-oriented documents, however, can be linked through certain relationships that occur in the context of evaluative discussions. For example, we may find textual4 evidence of a high likelihood of agreement be... [footnote 4: Because we are most interested in techniques applicable across domains, we restrict consideration to NLP aspects of the problem, ignoring external problem-specific information.]
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1511 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
transition-based dependency parsing framework (Nivre, 2008) using an arc-eager transition strategy and are trained using the perceptron algorithm as in Zhang and Clark (2008) with a beam size of 8. • Transition-based: An implementation of the For our experiments we focus on two dependency parsers.
Citation Sentence:
transition-based dependency parsing framework ( Nivre , 2008 ) using an arc-eager transition strategy and are trained using the perceptron algorithm as in Zhang and Clark ( 2008 ) with a beam size of 8 .
Context after the citation:
Beams with varying sizes can be used to produce k-best lists. The features used by all models are: the part-of-speech tags of the first four words on the buffer and of the top two words on the stack; the word identities of the first two words on the buffer and of the top word on the stack; the word identity of the syntactic head of the top word on the stack (if available); dependency arc label identities for the top word on the stack, the left and rightmost modifier of the top word on the stack, and the left most modifier of the first word in the buffer (if available). All feature conjunctions are included. • Graph-based: An implementation of graph-
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1512 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
Just as easily, we can model link types that coincide with entries in an on-line bilingual dictionary separately from those that do not (cf. Brown et al. 1993). A kind of word order correlation bias can be effected by conditioning the auxiliary parameters on the relative positions of linked word tokens in their respective texts. Similarly, the auxiliary parameters can be conditioned on the linked parts of speech.
Citation Sentence:
Just as easily , we can model link types that coincide with entries in an on-line bilingual dictionary separately from those that do not ( cfXXX Brown et al. 1993 ) .
Context after the citation:
When the auxiliary parameters are conditioned on different link classes, their optimization is carried out separately for each class: score_C(u, v | Z = class(u, v)) = log [ B(links(u, v) | cooc(u, v), λ+_Z) / B(links(u, v) | cooc(u, v), λ−_Z) ]  (37) Section 6.1.1 describes the link classes used in the experiments below.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1513 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
As an alternative, we rely on PubMed to retrieve an initial set of hits that we then postprocess in greater detail – this is the standard pipeline architecture commonly employed in other question-answering systems (Voorhees and Tice 1999; Hirschman and Gaizauskas 2001). However, we do not have access to the computational resources necessary to apply knowledge extractors to the 15 million plus citations in the MEDLINE database and directly index their results. Ideally, we would like to match structured representations derived from the question with those derived from MEDLINE citations (taking into consideration other EBM-relevant factors).
Citation Sentence:
As an alternative , we rely on PubMed to retrieve an initial set of hits that we then postprocess in greater detail -- this is the standard pipeline architecture commonly employed in other question-answering systems ( Voorhees and Tice 1999 ; Hirschman and Gaizauskas 2001 ) .
Context after the citation:
The architecture of our system is shown in Figure 1. The query formulator is responsible for converting a clinical question (in the form of a query frame) into a PubMed search query. Presently, these queries are already encoded in our test collection (see Section 6). PubMed returns an initial list of MEDLINE citations, which is then analyzed by our knowledge extractors (see Section 5). | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1514 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
(Och and Ney, 2002; Blunsom et al., 2008) used maximum likelihood estimation to learn weights for MT. (Och, 2003; Moore and Quirk, 2008; Zhao and Chen, 2009; Galley and Quirk, 2011) employed an evaluation metric as a loss function and directly optimized it. Several works have proposed discriminative techniques to train log-linear model for SMT.
Citation Sentence:
( Och and Ney , 2002 ; Blunsom et al. , 2008 ) used maximum likelihood estimation to learn weights for MT. ( Och , 2003 ; Moore and Quirk , 2008 ; Zhao and Chen , 2009 ; Galley and Quirk , 2011 ) employed an evaluation metric as a loss function and directly optimized it .
Context after the citation:
(Watanabe et al., 2007; Chiang et al., 2008; Hopkins and May, 2011) proposed other optimization objectives by introducing a margin-based and ranking-based indirect loss functions. All the methods mentioned above train a single weight for the whole development set, whereas our local training method learns a weight for each sentence. Further, our translation framework integrates the training and testing into one unit, instead of treating them separately. One of the advantages is that it can adapt the weights for each of the test sentences. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1515 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
We also compare the results with the output generated by the statistical translation system GIZA++/ISI ReWrite Decoder (AlOnaizan et al., 1999; Och and Ney, 2000; Germann et al., 2001), trained on the same parallel corpus. Section 8 compares translations generated from automatically built and manually annotated tectogrammatical representations. For the evaluation of the results we use the BLEU score (Papineni et al., 2001).
Citation Sentence:
We also compare the results with the output generated by the statistical translation system GIZA + + / ISI ReWrite Decoder ( AlOnaizan et al. , 1999 ; Och and Ney , 2000 ; Germann et al. , 2001 ) , trained on the same parallel corpus .
Context after the citation: | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1516 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Some efforts have tackled tasks such as automatic image caption generation (Feng and Lapata, 2010a; Ordonez et al., 2011), text illustration (Joshi et al., 2006), or automatic location identification of Twitter users (Eisenstein et al., 2010; Wing and Baldridge, 2011; Roller et al., 2012). Others provide automatic mappings of natural language instructions to executable actions, such as interpreting navigation directions (Chen and Mooney, 2011) or robot commands (Tellex et al., 2011; Matuszek et al., 2012). Some approaches apply semantic parsing, where words and sentences are mapped to logical structure meaning (Kate and Mooney, 2007).
Citation Sentence:
Some efforts have tackled tasks such as automatic image caption generation ( Feng and Lapata , 2010a ; Ordonez et al. , 2011 ) , text illustration ( Joshi et al. , 2006 ) , or automatic location identification of Twitter users ( Eisenstein et al. , 2010 ; Wing and Baldridge , 2011 ; Roller et al. , 2012 ) .
Context after the citation:
Another line of research approaches grounded language knowledge by augmenting distributional approaches of word meaning with perceptual information (Andrews et al., 2009; Steyvers, 2010; Feng and Lapata, 2010b; Bruni et al., 2011; Silberer and Lapata, 2012; Johns and Jones, 2012; Bruni et al., 2012a; Bruni et al., 2012b; Silberer et al., 2013). Although these approaches have differed in model definition, the general goal in this line of research has been to enhance word meaning with perceptual information in order to address one of the most common criticisms of distributional semantics: that the "meaning of words is entirely given by other words" (Bruni et al., 2012b). In this paper, we explore various ways to integrate new perceptual information through novel computational modeling of this grounded knowledge into a multimodal distributional model of word meaning. The model we rely on was originally developed by
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1517 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
Since we are not generating from the model, this does not introduce difficulties (Klein and Manning, 2002). Like CCM, this model is deficient since the same supertags are generated multiple times, and parses with conflicting supertags are not valid. Finally, since it is possible to generate a supertag context category that does not match the actual category generated by the neighboring constituent, we must allow our process to reject such invalid trees and re-attempt to sample.
Citation Sentence:
Since we are not generating from the model , this does not introduce difficulties ( Klein and Manning , 2002 ) .
Context after the citation:
One additional complication that must be addressed is that left-frontier non-terminal categories – those whose subtree span includes the first word of the sentence – do not have a left-side supertag to use as context. For these cases, we use the special sentence-start symbol (S) to serve as context. Similarly, we use the end symbol (E) for the right-side context of the right-frontier. We next discuss how the prior distributions are constructed to encode desirable biases, using universal CCG properties.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1518 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
The extraction procedure consists of three steps: First, the bracketing of the trees in the Penn Treebank is corrected and extended based on the approaches of Magerman (1994) and Collins (1997). Xia (1999) also presents a similar method for the extraction of a TAG from the Penn Treebank. The number of frame types extracted (i.e., an elementary tree without a specific lexical anchor) ranged from 2,366 to 8,996.
Citation Sentence:
The extraction procedure consists of three steps : First , the bracketing of the trees in the Penn Treebank is corrected and extended based on the approaches of Magerman ( 1994 ) and Collins ( 1997 ) .
Context after the citation:
Then the elementary trees are read off in a quite straightforward manner. Finally any invalid elementary trees produced as a result of annotation errors in the treebank are filtered out using linguistic heuristics. The number of frame types extracted by Xia (1999) ranged from 3,014 to 6,099. Hockenmaier, Bierner, and Baldridge (2004) outline a method for the automatic extraction of a large syntactic CCG lexicon from the Penn-II Treebank. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1519 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
There has also been work focused upon determining the political leaning (e.g., "liberal" vs. "conservative") of a document or author, where most previously-proposed methods make no direct use of relationships between the documents to be classified (the "unlabeled" texts) (Laver et al., 2003; Efron, 2004; Mullen and Malouf, 2006). Politically-oriented text Sentiment analysis has specifically been proposed as a key enabling technology in eRulemaking, allowing the automatic analysis of the opinions that people submit (Shulman et al., 2005; Cardie et al., 2006; Kwon et al., 2006).
Citation Sentence:
There has also been work focused upon determining the political leaning ( e.g. , `` liberal '' vs. `` conservative '' ) of a document or author , where most previously-proposed methods make no direct use of relationships between the documents to be classified ( the `` unlabeled '' texts ) ( Laver et al. , 2003 ; Efron , 2004 ; Mullen and Malouf , 2006 ) .
Context after the citation:
An exception is Grefenstette et al. (2004), who experimented with determining the political orientation of websites essentially by classifying the concatenation of all the documents found on that site. Others have applied the NLP technologies of near-duplicate detection and topic-based text categorization to politically oriented text (Yang and Callan, 2005; Purpura and Hillard, 2006). Detecting agreement We used a simple method to learn to identify cross-speaker references indicating agreement. More sophisticated approaches have been proposed (Hillard et al., 2003), including an extension that, in an interesting reversal of our problem, makes use of sentimentpolarity indicators within speech segments (Galley et al., 2004). | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:152 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
As they are required to enable test subjects to distinguish between senses, we use artificial glosses composed from synonyms and hypernyms as a surrogate, e.g. for brother: "brother, male sibling" vs. "brother, comrade, friend" (Gurevych, 2005). GermaNet contains only a few conceptual glosses. It is the most complete resource of this type for German.
Citation Sentence:
As they are required to enable test subjects to distinguish between senses , we use artificial glosses composed from synonyms and hypernyms as a surrogate , e.g. for brother : `` brother , male sibling '' vs. `` brother , comrade , friend '' ( Gurevych , 2005 ) .
Context after the citation:
We removed words which had more than three senses. Marginal manual post-processing was necessary, since the lemmatization process introduced some errors. Foreign words were translated into German, unless they are common technical terminology. We initially selected 100 word pairs from each corpus. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1520 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Over the last decade there has been a lot of interest in developing tutorial dialogue systems that understand student explanations (Jordan et al., 2006; Graesser et al., 1999; Aleven et al., 2001; Buckley and Wolska, 2007; Nielsen et al., 2008; VanLehn et al., 2007), because high percentages of self-explanation and student contentful talk are known to be correlated with better learning in human-human tutoring (Chi et al., 1994; Litman et al., 2009; Purandare and Litman, 2008; Steinhauser et al., 2007).
Citation Sentence:
Over the last decade there has been a lot of interest in developing tutorial dialogue systems that understand student explanations ( Jordan et al. , 2006 ; Graesser et al. , 1999 ; Aleven et al. , 2001 ; Buckley and Wolska , 2007 ; Nielsen et al. , 2008 ; VanLehn et al. , 2007 ) , because high percentages of selfexplanation and student contentful talk are known to be correlated with better learning in humanhuman tutoring ( Chi et al. , 1994 ; Litman et al. , 2009 ; Purandare and Litman , 2008 ; Steinhauser et al. , 2007 ) .
Context after the citation:
However, most existing systems use pre-authored tutor responses for addressing student errors. The advantage of this approach is that tutors can devise remediation dialogues that are highly tailored to specific misconceptions many students share, providing step-by-step scaffolding and potentially suggesting additional problems. The disadvantage is a lack of adaptivity and generality: students often get the same remediation for the same error regardless of their past performance or dialogue context, as it is infeasible to author a different remediation dialogue for every possible dialogue state. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1521 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
Dalrymple (2001) argues that there are cases, albeit exceptional ones, in which constraints on syntactic category are an issue in subcategorization. As a further extension, the extraction procedure reads off the syntactic category of the head of each of the subcategorized syntactic functions: impose(v,[subj(n),obj(n),obl:on]).3 In this way, our methodology is able to produce surface syntactic as well as abstract functional subcategorization details. For some of our experiments, we conflate the different verbal (and other) tags used in the Penn Treebanks to a single verbal marker (Table 4).
Citation Sentence:
Dalrymple ( 2001 ) argues that there are cases , albeit exceptional ones , in which constraints on syntactic category are an issue in subcategorization .
Context after the citation:
In contrast to much of the work reviewed in Section 3, which limits itself to the extraction of surface syntactic subcategorization details, our system can provide this information as well as details of grammatical function. 3 We do not associate syntactic categories with OBLs as they are always PPs. Another way in which we develop and extend the basic extraction algorithm is to deal with passive voice and its effect on subcategorization behavior. Consider Figure 5: Not taking into account that the example sentence is a passive construction, the extraction algorithm extracts outlaw([subj]). | Motivation | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1522 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
Corpus frequency: (Vosse, 1992) differentiates between misspellings and neologisms (new words) in terms of their frequency. An abridged list of the features that are used in the training data is listed in Table 2 and discussed below. The features we use are derived from previous research, including our own previous research on misspelling identification.
Citation Sentence:
Corpus frequency : ( Vosse , 1992 ) differentiates between misspellings and neologisms ( new words ) in terms of their frequency .
Context after the citation:
His algorithm classifies unknown words that appear infrequently as misspellings, and those that appear more frequently as neologisms. Our corpus frequency variable specifies the frequency of each unknown word in a 2.6 million word corpus of business news closed captions. Word Length: (Agirre et al., 1998) note that their predictions for the correct spelling of misspelled words are more accurate for words longer than four characters, and much less accurate for shorter words. This observation can also be found in (Kukich, 1992). | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1523 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
We use an in-house developed hierarchical phrase-based translation (Chiang, 2005) as our baseline system, and we denote it as In-Hiero. The significance testing is performed by paired bootstrap re-sampling (Koehn, 2004). In our experiments the translation performances are measured by case-insensitive BLEU4 metric (Papineni et al., 2002) and we use mtevalv13a.pl as the evaluation tool.
Citation Sentence:
We use an in-house developed hierarchical phrase-based translation ( Chiang , 2005 ) as our baseline system , and we denote it as In-Hiero .
Context after the citation:
To obtain satisfactory baseline performance, we tune In-Hiero system for 5 times using MERT, and then se- [table caption: (MERT). NIST05 is the set used to tune A for MBUU and EBUU, and NIST06 and NIST08 are test sets. + means the local method is significantly better than MERT with p < 0.05.]
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1524 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
The types of sentences accepted are essentially those accepted by the original NLC grammar, imperative sentences with nested noun groups and conjunctions (Ballard 1979). Its strategy is top-down. The expectation parser uses an ATN-like representation for its grammar (Woods 1970).
Citation Sentence:
The types of sentences accepted are essentially those accepted by the original NLC grammar , imperative sentences with nested noun groups and conjunctions ( Ballard 1979 ) .
Context after the citation:
An attempt has been made to build as deep a parse as possible so that sentences with the same meaning result in identical parses. Sentences have the same "meaning" if they result in identical tasks being performed. The various sentence structures that have the same meaning we call paraphrases. We have studied the following types of paraphrasing: | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1525 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
The task we used to compare different generalisation techniques is similar to that used by Pereira et al. (1993) and Rooth et al. (1999).
Citation Sentence:
The task we used to compare different generalisation techniques is similar to that used by Pereira et al. ( 1993 ) and Rooth et al. ( 1999 ) .
Context after the citation:
The task is to decide which of two verbs, v and vi, is more likely to take a given noun, n, as an object. The test and training data were obtained as follows. A number of verb direct object pairs were extracted from a subset of the BNC, using the system of Briscoe and Carroll. All those pairs containing a noun not in WordNet were removed, and each verb and argument was lemmatised. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1526 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
One of the proposed methods to extract paraphrases relies on a pivot-based approach using phrase alignments in a bilingual parallel corpus (Bannard and Callison-Burch, 2005). They proved to be useful in a number of NLP applications such as natural language generation (Iordanskaja et al., 1991), multidocument summarization (McKeown et al., 2002), automatic evaluation of MT (Denkowski and Lavie, 2010), and TE (Dinu and Wang, 2009). Paraphrase tables (PPHT) contain pairs of corresponding phrases in the same language, possibly associated with probabilities.
Citation Sentence:
One of the proposed methods to extract paraphrases relies on a pivot-based approach using phrase alignments in a bilingual parallel corpus ( Bannard and Callison-Burch , 2005 ) .
Context after the citation:
With this method, all the different phrases in one language that are aligned with the same phrase in the other language are extracted as paraphrases. After the extraction, pruning techniques (Snover et al., 2009) can be applied to increase the precision of the extracted paraphrases. In our work we used available2 paraphrase databases for English and Spanish which have been extracted using the method previously outlined. Moreover, in order to experiment with different paraphrase sets providing different degrees of coverage and precision, we pruned the main paraphrase table based on the probabilities, associated to its entries, of 0.1, 0.2 and 0.3. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1527 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
The Nash arbitration plan, for example, would allow a doubly graded description whenever the product of the Values for the referent r exceeds that of all distractors (Nash 1950; cf. Gorniak and Roy 2003; Thorisson 1994, for other plans). Many alternative strategies are possible. For example, if the example is modified by letting width(a) = 3.1 m, making a slightly fatter than b, then b might still be the only reasonable referent of the tall fat giraffe.
Citation Sentence:
The Nash arbitration plan , for example , would allow a doubly graded description whenever the product of the Values for the referent r exceeds that of all distractors ( Nash 1950 ; cfXXX Gorniak and Roy 2003 ; Thorisson 1994 , for other plans ) .
Context after the citation:
9.3.2 Multidimensional Adjectives (and Color). Multidimensionality can also slip in through the backdoor. Consider big, for example, when applied to 3D shapes. If there exists a formula for mapping three dimensions into one (e.g., length x width x height) then the result is one dimension (overall-size), and the algorithm of Section 4 can be applied verbatim. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1528 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Turney (2001) extracts word co-occurrence probabilities from unlabelled text collected from a web crawler. Studies have used several different methods to mine web data. The use of the web as a corpus for teaching and research on language has been proposed a number of times (Kilgarriff, 2001; Robb, 2003; Rundell, 2000; Fletcher, 2001, 2004b) and received a special issue of the journal Computational Linguistics (Kilgarriff and Grefenstette, 2003).
Citation Sentence:
Turney ( 2001 ) extracts word co-occurrence probabilities from unlabelled text collected from a web crawler .
Context after the citation:
Baroni and Bernardini (2004) built a corpus by iteratively searching Google for a small set of seed terms. Prototypes of Internet search engines for linguists, corpus linguists and lexicographers have been proposed: WebCorp (Kehoe and Renouf, 2002), KWiCFinder (Fletcher, 2004a) and the Linguist's Search Engine (Kilgarriff, 2003; Resnik and Elkiss, 2003). A key concern in corpus linguistics and related disciplines is verifiability and replicability of the results of studies. Word frequency counts in internet search engines are inconsistent and unreliable (Veronis, 2005).
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1529 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
It projects a functional head, voice (Kratzer, 1994), whose specifier is the external argument. The light verb vDO licenses an atelic non-inchoative event, and is compatible with verbal roots expressing activity. (9) vDO [+dynamic, -inchoative] = DO vb [+dynamic, +inchoative] = BECOME vBE [-dynamic] = BE
Citation Sentence:
It projects a functional head , voice ( Kratzer , 1994 ) , whose specifier is the external argument .
Context after the citation:
(10) John ran. The entire voiceP is further embedded under a tense projection (not shown here), and the verbal complex undergoes head movement and left adjoins to any overt tense markings. Similarly, the external argument raises to [Spec, TP]. This is in accordance with modern linguistic theory, more specifically, the subject-internal hypothesis. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:153 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Notable early papers on graph-based semisupervised learning include Blum and Chawla (2001), Bansal et al. (2002), Kondor and Lafferty (2002), and Joachims (2003). inter-document references in the form of hyperlinks (Agrawal et al., 2003). Previous sentiment-analysis work in different domains has considered inter-document similarity (Agarwal and Bhattacharyya, 2005; Pang and Lee, 2005; Goldberg and Zhu, 2006) or explicit
Citation Sentence:
Notable early papers on graph-based semisupervised learning include Blum and Chawla ( 2001 ) , Bansal et al. ( 2002 ) , Kondor and Lafferty ( 2002 ) , and Joachims ( 2003 ) .
Context after the citation:
Zhu (2005) maintains a survey of this area. Recently, several alternative, often quite sophisticated approaches to collective classification have been proposed (Neville and Jensen, 2000; Lafferty et al., 2001; Getoor et al., 2002; Taskar et al., 2002; Taskar et al., 2003; Taskar et al., 2004; McCallum and Wellner, 2004). It would be interesting to investigate the application of such methods to our problem. However, we also believe that our approach has important advantages, including conceptual simplicity and the fact that it is based on an underlying optimization problem that is provably and in practice easy to solve. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1530 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Due to this inherent ambiguity, manual annotations usually distinguish between sure correspondences for unambiguous translations, and possible, for ambiguous translations (Och and Ney 2003). There are many reasons why a simple word-to-word (1-to-1) correspondence is not possible for every sentence pair: for instance, auxiliary verbs used in one language but not the other (e.g., English He walked and French Il est allé), articles required in one language but optional in the other (e.g., English Cars use gas and Portuguese Os carros usam gasolina), cases where the content is expressed using multiple words in one language and a single word in the other language (e.g., agglutination such as English weapons of mass destruction and German Massenvernichtungswaffen), and expressions translated indirectly. A word alignment for a parallel sentence pair represents the correspondence between words in a source language and their translations in a target language (Brown et al. 1993b).
Citation Sentence:
Due to this inherent ambiguity , manual annotations usually distinguish between sure correspondences for unambiguous translations , and possible , for ambiguous translations ( Och and Ney 2003 ) .
Context after the citation:
The top row of Figure 1 shows two word alignments between an English–French sentence pair. We use the following notation: the alignment on the left (right) will be referenced as source–target (target–source) and contains source (target) words as rows and target (source) words as columns. Each entry in the matrix corresponds to a source–target word pair, and is the candidate for an alignment link. Sure links are represented as squares with borders, and possible links
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1531 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
Thus for instance, (Copestake and Flickinger, 2000; Copestake et al., 2001) describes a Head Driven Phrase Structure Grammar (HPSG) which supports the parallel construction of a phrase structure (or derived) tree and of a semantic representation and (Dalrymple, 1999) show how to equip Lexical Functional grammar (LFG) with a glue semantics. "Semantic grammars" already exist which describe not only the syntax but also the semantics of natural language.
Citation Sentence:
Thus for instance , ( Copestake and Flickinger , 2000 ; Copestake et al. , 2001 ) describes a Head Driven Phrase Structure Grammar ( HPSG ) which supports the parallel construction of a phrase structure ( or derived ) tree and of a semantic representation and ( Dalrymple , 1999 ) show how to equip Lexical Functional grammar ( LFG ) with a glue semantics .
Context after the citation:
These grammars are both efficient and large scale in that they cover an important fragment of the natural language they describe and can be processed by parsers and generators in almost real time. For instance, the LFG grammar parses sentences from the Wall Street Journal and the ERG HPSG grammar will produce semantic representations for about 83 per cent of the utterances in a corpus of some 10 000 utterances varying in length between one and thirty words. Parsing times vary between a few ms for short sentences and several tens of seconds for longer ones. Nonetheless, from a semantics viewpoint, these grammars fail to yield a clear account of the paraphrastic relation. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1532 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
The contextual interpreter then uses a reference resolution approach similar to Byron (2002), and an ontology mapping mechanism (Dzikovska et al., 2008a) to produce a domain-specific semantic representation of the student's output.
Citation Sentence:
The contextual interpreter then uses a reference resolution approach similar to Byron ( 2002 ) , and an ontology mapping mechanism ( Dzikovska et al. , 2008a ) to produce a domain-specific semantic representation of the student 's output .
Context after the citation:
Utterance content is represented as a set of extracted objects and relations between them. Negation is supported, together with a heuristic scoping algorithm. The interpreter also performs basic ellipsis resolution. For example, it can determine that in the answer to the question "Which bulbs will be on and which bulbs will be off in this diagram?" | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1533 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
only the available five relative scopings of the quantifiers are produced (Hobbs and Shieber 1987, 47), but without the need for a free variable constraint -- the HOU algorithm will not produce any solutions in which a previously bound variable becomes free; • the equivalences are reversible, and thus the above sentences can be generated from scoped logical forms; • partial scopings are permitted (see Reyle [1996]) • scoping can be freely interleaved with other types of reference resolution; • unscoped or partially scoped forms are available for inference or for generation at every stage. • in classic examples like: (21) Every representative in a company saw most samples. This is a rather oversimplified treatment of quantifier scope, which we will refine a little shortly, but even as it stands the treatment has several advantages:
Citation Sentence:
only the available five relative scopings of the quantifiers are produced ( Hobbs and Shieber 1987 , 47 ) , but without the need for a free variable constraint -- the HOU algorithm will not produce any solutions in which a previously bound variable becomes free ; • the equivalences are reversible , and thus the above sentences can be generated from scoped logical forms ; • partial scopings are permitted ( see Reyle [ 1996 ] ) • scoping can be freely interleaved with other types of reference resolution ; • unscoped or partially scoped forms are available for inference or for generation at every stage .
Context after the citation: | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1534 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
In the transducers produced by the training method described in this paper, the source and target positions are in the set {-1, 0, 1}, though we have also used handcoded transducers (Alshawi and Xia 1997) and automatically trained transducers (Alshawi and Douglas 2000) with a larger range of positions. (Formally, the function is partial in that it is not defined on an input when there are no derivations or when there are multiple outputs with the same minimal cost.) We can now define the string-to-string transduction function for a head transducer to be the function that maps an input string to the output string produced by the lowest-cost valid derivation taken over all initial states and initial symbols.
Citation Sentence:
In the transducers produced by the training method described in this paper , the source and target positions are in the set { -1 , 0 , 1 } , though we have also used handcoded transducers ( Alshawi and Xia 1997 ) and automatically trained transducers ( Alshawi and Douglas 2000 ) with a larger range of positions .
Context after the citation:
2.2 Relationship to Standard FSTs The operation of a traditional left-to-right transducer can be simulated by a head transducer by starting at the leftmost input symbol and setting the positions of the first transition taken to α = 0 and β = 0, and the positions for subsequent transitions to α = 1 and β = 1. However, we can illustrate the fact that head transducers are more (Figure caption: Head transducer to reverse an input string of arbitrary length in the alphabet {a, b}.) | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1535 |
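The context above turns on how relative source and target positions control the order in which a head transducer reads and writes, and its figure caption mentions a transducer that reverses strings over {a, b}. The fragment below is a rough, hypothetical illustration of that idea only -- it is not the formalism or training method of Alshawi and colleagues -- showing that consuming the input left to right while writing at the left end of the output reverses a string, whereas writing at the right end reproduces ordinary left-to-right transducer behaviour.

from collections import deque

# Toy position-controlled transduction: alpha picks which end of the remaining
# input to read next, beta picks which end of the output to write to.
def transduce(tokens, alpha=+1, beta=+1, emit=lambda t: t):
    remaining, out = deque(tokens), deque()
    while remaining:
        symbol = remaining.popleft() if alpha == +1 else remaining.pop()
        (out.append if beta == +1 else out.appendleft)(emit(symbol))
    return "".join(out)

print(transduce("abab"))             # "abab": read rightward, write rightward (standard left-to-right order)
print(transduce("abab", beta=-1))    # "baba": read rightward, write leftward (string reversal)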
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Similar approaches are being explored for parsing (Steedman, Hwa, et al. 2003; Hwa et al. 2003). Pierce and Cardie (2001) have shown, in the context of base noun identification, that combining sample selection and cotraining can be an effective learning framework for large-scale training. The work of Sarkar (2001) and Steedman, Osborne, et al. (2003) suggests that co-training can be helpful for statistical parsing.
Citation Sentence:
Similar approaches are being explored for parsing ( Steedman , Hwa , et al. 2003 ; Hwa et al. 2003 ) .
Context after the citation: | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1536 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
In particular, since we treat each individual speech within a debate as a single "document", we are considering a version of document-level sentiment-polarity classification, namely, automatically distinguishing between positive and negative documents (Das and Chen, 2001; Pang et al., 2002; Turney, 2002; Dave et al., 2003). Task properties Determining whether or not a speaker supports a proposal falls within the realm of sentiment analysis, an extremely active research area devoted to the computational treatment of subjective or opinion-oriented language (early work includes Wiebe and Rapaport (1988), Hearst (1992), Sack (1994), and Wiebe (1994); see Esuli (2006) for an active bibliography). Note that from an experimental point of view, this is a very convenient problem to work with because we can automatically determine ground truth (and thus avoid the need for manual annotation) simply by consulting publicly available voting records.
Citation Sentence:
In particular , since we treat each individual speech within a debate as a single `` document '' , we are considering a version of document-level sentiment-polarity classification , namely , automatically distinguishing between positive and negative documents ( Das and Chen , 2001 ; Pang et al. , 2002 ; Turney , 2002 ; Dave et al. , 2003 ) .
Context after the citation:
Most sentiment-polarity classifiers proposed in the recent literature categorize each document independently. A few others incorporate various measures of inter-document similarity between the texts to be labeled (Agarwal and Bhattacharyya, 2005; Pang and Lee, 2005; Goldberg and Zhu, 2006). Many interesting opinion-oriented documents, however, can be linked through certain relationships that occur in the context of evaluative discussions. For example, we may find textual[4] evidence of a high likelihood of agreement be (Footnote 4: Because we are most interested in techniques applicable across domains, we restrict consideration to NLP aspects of the problem, ignoring external problem-specific information.) | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:1537 |
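The passage above contrasts classifying each speech independently with exploiting links between documents, such as evidence that two speakers agree. As a small, hypothetical sketch of that general idea -- not the method of any paper cited above -- the fragment below starts from independent per-speech sentiment scores and nudges linked speeches toward each other before thresholding, so that the final labels are no longer decided document by document. All scores and links are invented.

# Sketch: combine independent scores with pairwise agreement links.
def label_with_agreement(scores, agreement_links, weight=0.5, iterations=10):
    scores = dict(scores)  # doc_id -> score in [-1, 1]; positive means "support"
    for _ in range(iterations):
        updated = dict(scores)
        for a, b in agreement_links:       # each link says a and b probably agree
            mean = (scores[a] + scores[b]) / 2.0
            updated[a] = (1 - weight) * updated[a] + weight * mean
            updated[b] = (1 - weight) * updated[b] + weight * mean
        scores = updated
    return {doc: ("support" if s >= 0 else "oppose") for doc, s in scores.items()}

speeches = {"speech_1": 0.8, "speech_2": -0.1, "speech_3": -0.9}  # invented classifier scores
links = [("speech_1", "speech_2")]                                 # invented agreement evidence
print(label_with_agreement(speeches, links))
# speech_2 flips to "support": the agreement link outweighs its weakly negative score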