# Dataset Card for "SciDTB Argmin"

### Dataset Summary

[Accuosto and Saggion (2019)](https://aclanthology.org/W19-4505.pdf) built this dataset from 60 English scientific abstracts drawn from the larger annotated *Discourse Dependency TreeBank for Scientific Abstracts* (SciDTB; [Yang & Li, 2018](https://aclanthology.org/P18-2071)), adding fine-grained annotations for the argumentative component classification and relation classification tasks.

The dataset contains 327 sentences, 8012 tokens, 862 discourse units and 352 argumentative units (with 6 unit labels) linked by 292 argumentative relations (with 5 relation labels).

### Supported Tasks and Leaderboards

- **Tasks:** Argument Mining, Component Classification, Relation Classification
- **Leaderboards:** \[More Information Needed\]

### Languages

The language in the dataset is English (academic).

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 32.4 KB

```
{
  'id': 'D14-1002-fexp-corpus',
  'data': {
    'token': ["This", "paper", "presents", "a", "deep", "semantic", ...],
    'unit-bio': [0, 1, 1, 1, 1, ...],
    'unit-label': [0, 0, 0, 0, 0, ...],
    'role': [4, 4, 4, 4, 4, ...],
    'parent-offset': [0, 0, 0, 0, 0, ...]
  }
}
```
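
A minimal loading sketch with the Hugging Face `datasets` library is shown below. It assumes the single `train` split listed under Data Splits, and that the script-based loader may require `trust_remote_code=True` in recent versions of `datasets`.

```python
# Minimal sketch: load the dataset and inspect the first instance.
# Assumes a single "train" split; trust_remote_code may be needed
# for script-based datasets in recent versions of `datasets`.
from datasets import load_dataset

dataset = load_dataset("DFKI-SLT/scidtb_argmin", trust_remote_code=True)
example = dataset["train"][0]

print(example["id"])
print(example["data"]["token"][:10])
print(example["data"]["unit-bio"][:10])
print(example["data"]["unit-label"][:10])
```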

### Data Fields

- `id`: the instance id of the document, a `string` feature
- `data`: a `dictionary` feature containing:
  - `token`: the word tokens of the whole document, a `list` of `string` features
  - `unit-bio`: the BIO-style label indicating whether a token is the beginning of a unit (label `0`) or not (label `1`), a `list` of `int` features
  - `unit-label`: the label of the span the token belongs to, indicating its argumentation type, a `list` of `int` features (see the [label list](https://huggingface.co/datasets/DFKI-SLT/scidtb_argmin/blob/main/scidtb_argmin.py#L42-50))
  - `role`: the relation label of the span the token belongs to, indicating its argumentative relation to another span, a `list` of `int` features (see the [label list](https://huggingface.co/datasets/DFKI-SLT/scidtb_argmin/blob/main/scidtb_argmin.py#L51))
  - `parent-offset`: the distance from the current span to the span it is related to (as indicated by `role`), a `list` of `int` features (see the decoding sketch below)
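
Because the annotations are stored as parallel token-level lists, argumentative units have to be reconstructed from the BIO encoding. The sketch below shows one possible decoding; it assumes that every token of a unit carries the same `unit-label`, `role` and `parent-offset` values and that `parent-offset` is an offset in units relative to the current unit's position (check the dataset script for the exact semantics).

```python
def decode_units(data):
    """Group BIO-encoded tokens into argumentative units.

    `data` is the `data` field of one instance, holding the parallel
    token-level lists described above (an assumption of this sketch).
    """
    units = []
    for i, (token, bio) in enumerate(zip(data["token"], data["unit-bio"])):
        if bio == 0 or not units:  # 0 marks the beginning of a new unit
            units.append({
                "tokens": [token],
                "label": data["unit-label"][i],
                "role": data["role"][i],
                "parent-offset": data["parent-offset"][i],
            })
        else:  # 1 marks a continuation of the current unit
            units[-1]["tokens"].append(token)

    # Hypothetical parent resolution: treat parent-offset as a signed
    # offset in units from the current unit's position.
    for idx, unit in enumerate(units):
        unit["parent"] = idx + unit["parent-offset"]
    return units
```

Applied to `example["data"]` from the loading sketch above, this yields one dictionary per argumentative unit; the integer `label` and `role` values can then be mapped to their string names via the label lists linked above.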

### Data Splits

|                                                                                                                     |                                      train |
| ------------------------------------------------------------------------------------------------------------------- | -----------------------------------------: |
| Size                                                                                                                  | 60                                          |
| Span Labels<br/>- `Proposal`<br/>- `Mean`<br/>- `Result`<br/>- `Observation`<br/>- `Assertion`<br/>- `Description`    | <br/>110<br/>63<br/>74<br/>11<br/>88<br/>7  |
| Relation Labels<br/>- `Support`<br/>- `Attack`<br/>- `Detail`<br/>- `Sequence`<br/>- `Additional`                     | <br/>126<br/>0<br/>129<br/>11<br/>27        |

## Dataset Creation

### Curation Rationale

"We propose to tackle the limitations posed by the lack of annotated data for argument mining in the scientific domain by leveraging existing Rhetorical Structure Theory (RST) (Mann et al., 1992) annotations in a corpus of computational linguistics abstracts (SciDTB) (Yang and Li, 2018)." (p. 42)

"We introduce a fine-grained annotation scheme aimed at capturing information that accounts for the specificities of the scientific discourse, including the type of evidence that is offered to support a statement (e.g., background information, experimental data or interpretation of results). This can provide relevant information, for instance, to assess the argumentative strength of a text." (p. 44)

### Source Data

The source texts are abstracts of papers from the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), available online at https://emnlp2014.org/.

#### Initial Data Collection and Normalization

"This work is informed by previous research in the areas of argument mining, argumentation quality assessment and the relationship between discourse and argumentative structures and, from the methodological perspective, to transfer learning approaches." Previously, Yang and Li (2018) divided each passage into non-overlapping text spans, called elementary discourse units (EDUs), following the criterion of Polanyi (1988) and Irmer (2011) and the guidelines defined by Carlson and Marcu (2001). For more information about the initial data collection and annotation, please see SciDTB's [dataset card](https://huggingface.co/datasets/DFKI-SLT/scidtb).

The current authors added a new annotation layer to the elementary discourse units (EDUs) that Yang & Li (2018) had annotated within the RST framework, namely fine-grained argumentative unit labels and relation labels. (p. 43)

#### Who are the source language producers?

The authors report no demographic or identity information about the source language producers. The texts can be inferred to be human-written, produced by researchers in the field of computational linguistics/NLP and possibly edited by human reviewers.

### Annotations

#### Annotation process

"We consider a subset of the SciDTB corpus consisting of 60 abstracts from the Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) and transformed them into a format suitable for the GraPAT graph annotation tool (Sonntag and Stede, 2014)."

"...The corpus enriched with the argumentation level contains a total of 327 sentences, 8012 tokens, 862 discourse units and 352 argumentative units linked by 292 argumentative relations." (p. 43)

#### Who are the annotators?

\[More Information Needed\]

### Personal and Sensitive Information

\[More Information Needed\]

## Considerations for Using the Data

### Social Impact of Dataset

"The development of automatic systems to support the quality assessment of scientific texts can facilitate the work of editors and referees of scientific publications and, at the same time, be of value for researchers to obtain feedback that can lead to improve the communication of their results...Aspects such as the argumentative structure of the text are key when analyzing its effectiveness with respect to its communication objectives (Walton and Walton, 1989)." (p. 41)

"Being able to extract not only what is being stated by the authors of a text but also the reasons they provide to support it can be useful in multiple applications, ranging from a fine-grained analysis of opinions to the generation of abstractive summaries of texts." (p. 41)

### Discussion of Biases

"The types of argumentative units are distributed as follows: 31% of the units are of type proposal, 25% assertion, 21% result, 18% means, 3% observation, and 2% description. In turn, the relations are distributed: 45% of type detail, 42% support, 9% additional, and 4% sequence. No attack relations were identified in the set of currently annotated texts."

"When considering the distance of the units to their parent unit in the argumentation tree, we observe that the majority (57%) are linked to a unit that occurs right before or after it in the text, while 19% are linked to a unit with a distance of 1 unit in-between, 12% to a unit with a distance of 2 units, 6% to a unit with a distance of 3, and 6% to a unit with a distance of 4 or more."

(p. 44)

### Other Known Limitations

\[More Information Needed\]

## Additional Information

### Dataset Curators

"This work is (partly) supported by the Spanish Government under the María de Maeztu Units of Excellence Programme (MDM-2015-0502)." (p. 49)

### Licensing Information

\[More Information Needed\]

### Citation Information

The current dataset:

```
@inproceedings{accuosto-saggion-2019-transferring,
    title = "Transferring Knowledge from Discourse to Arguments: A Case Study with Scientific Abstracts",
    author = "Accuosto, Pablo  and
      Saggion, Horacio",
    editor = "Stein, Benno  and
      Wachsmuth, Henning",
    booktitle = "Proceedings of the 6th Workshop on Argument Mining",
    month = aug,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W19-4505",
    doi = "10.18653/v1/W19-4505",
    pages = "41--51",
}
```

The original SciDTB dataset:

```
@inproceedings{yang-li-2018-scidtb,
    title = "{S}ci{DTB}: Discourse Dependency {T}ree{B}ank for Scientific Abstracts",
    author = "Yang, An  and
      Li, Sujian",
    editor = "Gurevych, Iryna  and
      Miyao, Yusuke",
    booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
    month = jul,
    year = "2018",
    address = "Melbourne, Australia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P18-2071",
    doi = "10.18653/v1/P18-2071",
    pages = "444--449",
}
```

### Contributions

Thanks to [@idalr](https://github.com/idalr) for adding this dataset.