ArneBinder
committed on
https://github.com/ArneBinder/pie-datasets/pull/100
Browse files
- README.md +181 -16
- img/rtd-label_aae2_test.png +3 -0
- img/rtd-label_aae2_train.png +3 -0
- img/sg17f2.png +3 -0
- img/slt_aae2_test.png +3 -0
- img/slt_aae2_train.png +3 -0
- img/tl_aae2_test.png +3 -0
- img/tl_aae2_train.png +3 -0
- requirements.txt +2 -2
README.md
CHANGED
@@ -4,11 +4,35 @@ This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for t
 
 Therefore, the `aae2` dataset as described here follows the data structure from the [PIE brat dataset card](https://huggingface.co/datasets/pie/brat).
 
 ### Dataset Summary
 
 Argument Annotated Essays Corpus (AAEC) ([Stab and Gurevych, 2017](https://aclanthology.org/J17-3005.pdf)) contains student essays. A stance for a controversial theme is expressed by a major claim component as well as claim components, and premise components justify or refute the claims. Attack and support labels are defined as relations. The span covers a statement, *which can stand in isolation as a complete sentence*, according to the AAEC annotation guidelines. All components are annotated with minimum boundaries of a clause or sentence excluding so-called "shell" language such as *On the other hand* and *Hence*. (Morio et al., 2022, p. 642)
 
-
 
 ### Supported Tasks and Leaderboards
@@ -27,17 +51,6 @@ The `aae2` dataset comes in a single version (`default`) with `BratDocumentWithM
 
 See [PIE-Brat Data Schema](https://huggingface.co/datasets/pie/brat#data-schema).
 
-### Usage
-
-```python
-from pie_datasets import load_dataset, builders
-
-# load default version
-datasets = load_dataset("pie/aae2")
-doc = datasets["train"][0]
-assert isinstance(doc, builders.brat.BratDocumentWithMergedSpans)
-```
-
 ### Data Splits
 
 | Statistics | Train | Test |
@@ -50,7 +63,7 @@ assert isinstance(doc, builders.brat.BratDocumentWithMergedSpans)
 
 See further statistics in Stab & Gurevych (2017), p. 650, Table A.1.
 
-### Label Descriptions
+### Label Descriptions and Statistics
 
 #### Components
@@ -64,8 +77,6 @@ See further statistics in Stab & Gurevych (2017), p. 650, Table A.1.
 
 - `Claim` constitutes the central component of each argument. Each one has at least one premise and takes stance attribute values "for" or "against" with regard to the major claim.
 - `Premise` provides the reasons of the argument; each premise is linked either to a claim or to another premise.
 
-**Note that** relations between `MajorClaim` and `Claim` were not annotated; however, each claim is annotated with an `Attribute` annotation with value `for` or `against`, which indicates its relation to the `MajorClaim`. In addition, when two unrelated `Claim`s appear in one paragraph, no relation between them is annotated.
-
 #### Relations
 
 | Relations | Count | Percentage |
@@ -79,6 +90,12 @@ See further statistics in Stab & Gurevych (2017), p. 650, Table A.1.
 
 See further description in Stab & Gurevych 2017, p. 627 and the [annotation guideline](https://github.com/ArneBinder/pie-datasets/blob/db94035602610cefca2b1678aa2fe4455c96155d/data/datasets/ArgumentAnnotatedEssays-2.0/guideline.pdf).
 
 ### Document Converters
 
 The dataset provides document converters for the following target document types:
@@ -104,7 +121,7 @@ The dataset provides document converters for the following target document types
 See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type
 definitions.
 
-#### Label Statistics after Document Conversion
+#### Relation Label Statistics after Document Conversion
 
 When converting from `BratDocumentWithMergedSpans` to `TextDocumentWithLabeledSpansAndBinaryRelations` and `TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions`,
 we apply a relation-conversion method (see above) that changes the label counts for the relations, as follows:
@@ -124,6 +141,154 @@ we apply a relation-conversion method (see above) that changes the label counts
 | support: `supports` | 5958 | 89.3 % |
 | attack: `attacks` | 715 | 10.7 % |
 
 ## Dataset Creation
 
 ### Curation Rationale

README.md (updated sections):

Therefore, the `aae2` dataset as described here follows the data structure from the [PIE brat dataset card](https://huggingface.co/datasets/pie/brat).

### Usage

```python
from pie_datasets import load_dataset
from pie_datasets.builders.brat import BratDocumentWithMergedSpans
from pytorch_ie.documents import TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions

# load default version
dataset = load_dataset("pie/aae2")
assert isinstance(dataset["train"][0], BratDocumentWithMergedSpans)

# if required, normalize the document type (see section Document Converters below)
dataset_converted = dataset.to_document_type(TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions)
assert isinstance(dataset_converted["train"][0], TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions)

# get first relation in the first document
doc = dataset_converted["train"][0]
print(doc.binary_relations[0])
# BinaryRelation(head=LabeledSpan(start=716, end=851, label='Premise', score=1.0), tail=LabeledSpan(start=591, end=714, label='Claim', score=1.0), label='supports', score=1.0)
print(doc.binary_relations[0].resolve())
# ('supports', (('Premise', 'What we acquired from team work is not only how to achieve the same goal with others but more importantly, how to get along with others'), ('Claim', 'through cooperation, children can learn about interpersonal skills which are significant in the future life of all students')))
```

### Dataset Summary

Argument Annotated Essays Corpus (AAEC) ([Stab and Gurevych, 2017](https://aclanthology.org/J17-3005.pdf)) contains student essays. A stance for a controversial theme is expressed by a major claim component as well as claim components, and premise components justify or refute the claims. Attack and support labels are defined as relations. The span covers a statement, *which can stand in isolation as a complete sentence*, according to the AAEC annotation guidelines. All components are annotated with minimum boundaries of a clause or sentence excluding so-called "shell" language such as *On the other hand* and *Hence*. (Morio et al., 2022, p. 642)

In the original dataset, no premise links to another premise or claim in a different paragraph; that is, each argumentation tree is complete within a single paragraph. It is therefore possible to train a model on full documents or just at the paragraph level, which is usually less memory-intensive (Eger et al., 2017, p. 16).
However, through our `DOCUMENT_CONVERTERS`, we build links between claims, creating a graph structure throughout an entire essay (see [Document Converters](#document-converters)).
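
Since the converted documents keep the paragraph information in their `labeled_partitions` layer, the paragraph structure can be inspected directly. A minimal sketch, reusing `dataset_converted` from the Usage section (the counting logic is our illustration, not part of the dataset):

```python
# count the argumentative spans that fall into each paragraph partition
doc = dataset_converted["train"][0]
for paragraph in doc.labeled_partitions:
    contained = [
        span
        for span in doc.labeled_spans
        if span.start >= paragraph.start and span.end <= paragraph.end
    ]
    print(paragraph.label, len(contained))
```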

### Supported Tasks and Leaderboards

See [PIE-Brat Data Schema](https://huggingface.co/datasets/pie/brat#data-schema).

### Data Splits

| Statistics | Train | Test |

See further statistics in Stab & Gurevych (2017), p. 650, Table A.1.

### Label Descriptions and Statistics

#### Components

- `Claim` constitutes the central component of each argument. Each one has at least one premise and takes stance attribute values "for" or "against" with regard to the major claim.
- `Premise` provides the reasons of the argument; each premise is linked either to a claim or to another premise.

#### Relations

| Relations | Count | Percentage |

See further description in Stab & Gurevych 2017, p. 627 and the [annotation guideline](https://github.com/ArneBinder/pie-datasets/blob/db94035602610cefca2b1678aa2fe4455c96155d/data/datasets/ArgumentAnnotatedEssays-2.0/guideline.pdf).

**Note that** relations between `MajorClaim` and `Claim` were not annotated; however, each claim is annotated with an `Attribute` annotation with value `for` or `against`, which indicates its relation to the `MajorClaim`. In addition, when two unrelated `Claim`s appear in one paragraph, no relation between them is annotated. An example document is shown below.

#### Example

![Example](img/sg17f2.png)
98 |
+
|
99 |
### Document Converters
|
100 |
|
101 |
The dataset provides document converters for the following target document types:
|

See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type
definitions.
123 |
|
124 |
+
#### Relation Label Statistics after Document Conversion
|
125 |
|
126 |
When converting from `BratDocumentWithMergedSpan` to `TextDocumentWithLabeledSpansAndBinaryRelations` and `TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions`,
|
127 |
we apply a relation-conversion method (see above) that changes the label counts for the relations, as follows:
|

| support: `supports` | 5958 | 89.3 % |
| attack: `attacks` | 715 | 10.7 % |
143 |
|
144 |
+
### Collected Statistics after Document Conversion
|
145 |
+
|
146 |
+
We use the script `evaluate_documents.py` from [PyTorch-IE-Hydra-Template](https://github.com/ArneBinder/pytorch-ie-hydra-template-1) to generate these statistics.
After checking out that code, the statistics and plots can be generated by the command:

```commandline
python src/evaluate_documents.py dataset=aae2_base metric=METRIC
```

where `METRIC` is one of the available metric configs in `config/metric/METRIC` (see [metrics](https://github.com/ArneBinder/pytorch-ie-hydra-template-1/tree/main/configs/metric)).

This also requires the following dataset config at `configs/dataset/aae2_base.yaml` in the repo directory:

```yaml
_target_: src.utils.execute_pipeline
input:
  _target_: pie_datasets.DatasetDict.load_dataset
  path: pie/aae2
  revision: 1015ee38bd8a36549b344008f7a49af72956a7fe
```
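
The same pinned load can be done programmatically; a sketch mirroring the config above (keyword arguments taken directly from it):

```python
from pie_datasets import DatasetDict

# pin the dataset revision, exactly as in the config above
dataset = DatasetDict.load_dataset(
    path="pie/aae2",
    revision="1015ee38bd8a36549b344008f7a49af72956a7fe",
)
```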

For token-based metrics, this uses `bert-base-uncased` from `transformers.AutoTokenizer` (see [AutoTokenizer](https://huggingface.co/docs/transformers/v4.37.1/en/model_doc/auto#transformers.AutoTokenizer) and [bert-base-uncased](https://huggingface.co/bert-base-uncased)) to tokenize `text` in `TextDocumentWithLabeledSpansAndBinaryRelations` (see [document type](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py)).
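
For instance, a document's token count can be reproduced along these lines (a sketch; the example text is made up):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text = "Through cooperation, children can learn about interpersonal skills."
num_tokens = len(tokenizer(text)["input_ids"])
print(num_tokens)
```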

For relation-label statistics, we collect those from the default relation conversion method, i.e., `connect_first`, resulting in three distinct relation labels.
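
These label counts can be re-derived from the converted dataset itself; a sketch reusing `dataset_converted` from the Usage section:

```python
from collections import Counter

# count relation labels over the converted train split
counts = Counter(
    relation.label
    for document in dataset_converted["train"]
    for relation in document.binary_relations
)
print(counts)  # expected labels: supports, attacks, semantically_same
```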

#### Relation argument (outer) token distance per label

The distance is measured from the first token of the first argumentative unit to the last token of the last unit, a.k.a. outer distance.

We collect the following statistics: number of documents in the split (*no. doc*), number of relations (*len*), mean of the token distance (*mean*), standard deviation of the distance (*std*), minimum outer distance (*min*), and maximum outer distance (*max*).
We also present histograms in the collapsible sections below, showing the distribution of these relation distances (x-axis) and their counts (y-axis).
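
As a sketch of how the outer distance can be computed, assuming each relation argument is given as a `(start, end)` pair of token indices:

```python
def outer_token_distance(head: tuple[int, int], tail: tuple[int, int]) -> int:
    # span from the first token of the first argument to the last token of the last one
    start = min(head[0], tail[0])
    end = max(head[1], tail[1])
    return end - start

print(outer_token_distance((10, 25), (40, 62)))  # 52
```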

<details>
<summary>Command</summary>

```
python src/evaluate_documents.py dataset=aae2_base metric=relation_argument_token_distances
```

</details>

##### train (322 documents)

|                   |  len | max |    mean | min |     std |
| :---------------- | ---: | --: | ------: | --: | ------: |
| ALL               | 9002 | 514 | 102.582 |   9 |   93.76 |
| attacks           |  810 | 442 | 127.622 |  10 | 109.283 |
| semantically_same |  552 | 514 | 301.638 |  25 |  73.756 |
| supports          | 7640 | 493 |  85.545 |   9 |  74.023 |

<details>
<summary>Histogram (split: train, 322 documents)</summary>

![rtd-label_aae2_train.png](img/rtd-label_aae2_train.png)

</details>

##### test (80 documents)

|                   |  len | max |    mean | min |    std |
| :---------------- | ---: | --: | ------: | --: | -----: |
| ALL               | 2372 | 442 | 100.711 |  10 | 92.698 |
| attacks           |  184 | 402 | 115.891 |  12 | 98.751 |
| semantically_same |  146 | 442 | 299.671 |  34 | 72.921 |
| supports          | 2042 | 437 |  85.118 |  10 | 75.023 |

<details>
<summary>Histogram (split: test, 80 documents)</summary>

![rtd-label_aae2_test.png](img/rtd-label_aae2_test.png)

</details>

#### Span lengths (tokens)

The span length is the number of tokens from the first to the last token of an argumentative unit.

We collect the following statistics: number of documents in the split (*no. doc*), number of spans (*len*), mean number of tokens in a span (*mean*), standard deviation of the number of tokens (*std*), minimum number of tokens in a span (*min*), and maximum number of tokens in a span (*max*).
We also present histograms in the collapsible sections below, showing the distribution of these span lengths (x-axis) and their counts (y-axis).

<details>
<summary>Command</summary>

```
python src/evaluate_documents.py dataset=aae2_base metric=span_lengths_tokens
```

</details>

| statistics | train  | test   |
| :--------- | -----: | -----: |
| no. doc    | 322    | 80     |
| len        | 4823   | 1266   |
| mean       | 17.157 | 16.317 |
| std        | 8.079  | 7.953  |
| min        | 3      | 3      |
| max        | 75     | 50     |

<details>
<summary>Histogram (split: train, 322 documents)</summary>

![slt_aae2_train.png](img/slt_aae2_train.png)

</details>
<details>
<summary>Histogram (split: test, 80 documents)</summary>

![slt_aae2_test.png](img/slt_aae2_test.png)

</details>

#### Token length (tokens)

The token length of a document is measured from its first token to its last.

We collect the following statistics: number of documents in the split (*no. doc*), mean document token length (*mean*), standard deviation of the length (*std*), minimum number of tokens in a document (*min*), and maximum number of tokens in a document (*max*).
We also present histograms in the collapsible sections below, showing the distribution of these token lengths (x-axis) and their counts (y-axis).

<details>
<summary>Command</summary>

```
python src/evaluate_documents.py dataset=aae2_base metric=count_text_tokens
```

</details>

| statistics | train   | test   |
| :--------- | ------: | -----: |
| no. doc    | 322     | 80     |
| mean       | 377.686 | 378.4  |
| std        | 64.534  | 66.054 |
| min        | 236     | 269    |
| max        | 580     | 532    |

<details>
<summary>Histogram (split: train, 322 documents)</summary>

![tl_aae2_train.png](img/tl_aae2_train.png)

</details>
<details>
<summary>Histogram (split: test, 80 documents)</summary>

![tl_aae2_test.png](img/tl_aae2_test.png)

</details>

## Dataset Creation

### Curation Rationale
img/rtd-label_aae2_test.png ADDED (Git LFS)
img/rtd-label_aae2_train.png ADDED (Git LFS)
img/sg17f2.png ADDED (Git LFS)
img/slt_aae2_test.png ADDED (Git LFS)
img/slt_aae2_train.png ADDED (Git LFS)
img/tl_aae2_test.png ADDED (Git LFS)
img/tl_aae2_train.png ADDED (Git LFS)
requirements.txt
CHANGED
@@ -1,2 +1,2 @@
-pie-datasets>=0.8.0,<0.
-pie-modules>=0.8.3,<0.
+pie-datasets>=0.8.0,<0.11.0
+pie-modules>=0.8.3,<0.12.0