ArneBinder committed
Commit 2b8dff2 · verified · Parent: 0017228

use pie-modules instead of pytorch-ie

see https://github.com/ArneBinder/pie-datasets/pull/204 for further information

Files changed (3):
  1. README.md +174 -4
  2. cdcp.py +143 -143
  3. requirements.txt +2 -2

README.md CHANGED
@@ -1,8 +1,29 @@
- # PIE Dataset Card for "CDCP"

  This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for the
  [CDCP Huggingface dataset loading script](https://huggingface.co/datasets/DFKI-SLT/cdcp).

  ## Data Schema

  The document type for this dataset is `CDCPDocument` which defines the following data fields:
@@ -17,13 +38,162 @@ and the following annotation layers:
  - `relations` (annotation type: `BinaryRelation`, target: `propositions`)
  - `urls` (annotation type: `Attribute`, target: `propositions`)

- See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/annotations.py) for the annotation type definitions.

  ## Document Converters

  The dataset provides document converters for the following target document types:

- - `pytorch_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations`

- See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type
  definitions.

+ # PIE Dataset Card for "cdcp"

  This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for the
  [CDCP Huggingface dataset loading script](https://huggingface.co/datasets/DFKI-SLT/cdcp).

+ ## Usage
+
+ ```python
+ from pie_datasets import load_dataset
+ from pie_modules.documents import TextDocumentWithLabeledSpansAndBinaryRelations
+
+ # load English variant
+ dataset = load_dataset("pie/cdcp")
+
+ # if required, normalize the document type (see section Document Converters below)
+ dataset_converted = dataset.to_document_type(TextDocumentWithLabeledSpansAndBinaryRelations)
+ assert isinstance(dataset_converted["train"][0], TextDocumentWithLabeledSpansAndBinaryRelations)
+
+ # get first relation in the first document
+ doc = dataset_converted["train"][0]
+ print(doc.binary_relations[0])
+ # BinaryRelation(head=LabeledSpan(start=0, end=78, label='value', score=1.0), tail=LabeledSpan(start=79, end=242, label='value', score=1.0), label='reason', score=1.0)
+ print(doc.binary_relations[0].resolve())
+ # ('reason', (('value', 'State and local court rules sometimes make default judgments much more likely.'), ('value', 'For example, when a person who allegedly owes a debt is told to come to court on a work day, they may be forced to choose between a default judgment and their job.')))
+ ```
+
  ## Data Schema

  The document type for this dataset is `CDCPDocument` which defines the following data fields:

  - `relations` (annotation type: `BinaryRelation`, target: `propositions`)
  - `urls` (annotation type: `Attribute`, target: `propositions`)

+ See [here](https://github.com/ArneBinder/pie-modules/blob/main/src/pie_modules/annotations.py) for the annotation type definitions.
 
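+ To make the schema concrete, here is a minimal sketch that builds a `CDCPDocument` by hand. It assumes `cdcp.py` from this repository is importable (e.g. when run from the repository root); the example text, offsets, and URL are made up:
+
+ ```python
+ from pie_modules.annotations import BinaryRelation, LabeledSpan
+
+ from cdcp import Attribute, CDCPDocument  # defined in the dataset loading script
+
+ doc = CDCPDocument(id="example-0", text="Debt collectors should be licensed. This protects consumers.")
+
+ # propositions are labeled spans over the document text
+ claim = LabeledSpan(start=0, end=35, label="policy")
+ support = LabeledSpan(start=36, end=60, label="value")
+ doc.propositions.append(claim)
+ doc.propositions.append(support)
+
+ # relations connect two propositions
+ doc.relations.append(BinaryRelation(head=support, tail=claim, label="reason"))
+
+ # urls attach an attribute value to a proposition
+ doc.urls.append(Attribute(annotation=claim, value="https://example.com"))
+
+ print(doc.propositions[0].resolve())
+ # ('policy', 'Debt collectors should be licensed.')
+ ```
+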
  ## Document Converters

  The dataset provides document converters for the following target document types:

+ - `pie_modules.documents.TextDocumentWithLabeledSpansAndBinaryRelations`
+   - `labeled_spans`: `LabeledSpan` annotations, converted from `CDCPDocument`'s `propositions`
+     - labels: `fact`, `policy`, `reference`, `testimony`, `value`
+     - if a proposition contains leading and/or trailing whitespace, it is trimmed (see the sketch below)
+   - `binary_relations`: `BinaryRelation` annotations, converted from `CDCPDocument`'s `relations`
+     - labels: `reason`, `evidence`

+ See [here](https://github.com/ArneBinder/pie-modules/blob/main/src/pie_modules/documents.py) for the document type
  definitions.
+
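+ The conversion itself is implemented by `convert_to_text_document_with_labeled_spans_and_binary_relations` in `cdcp.py` (part of this commit). The following is a minimal sketch of the whitespace trimming, again assuming `cdcp.py` is importable and using a made-up document:
+
+ ```python
+ from pie_modules.annotations import LabeledSpan
+
+ from cdcp import CDCPDocument, convert_to_text_document_with_labeled_spans_and_binary_relations
+
+ doc = CDCPDocument(id="example-1", text="Debt collectors should be licensed. ")
+ # the span deliberately includes the trailing space (end=36)
+ doc.propositions.append(LabeledSpan(start=0, end=36, label="policy"))
+
+ converted = convert_to_text_document_with_labeled_spans_and_binary_relations(doc)
+ print(converted.labeled_spans[0].resolve())
+ # expected: ('policy', 'Debt collectors should be licensed.')  -- trailing space trimmed
+ ```
+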
+ ### Collected Statistics after Document Conversion
+
+ We use the script `evaluate_documents.py` from [PyTorch-IE-Hydra-Template](https://github.com/ArneBinder/pytorch-ie-hydra-template-1) to generate these statistics.
+ After checking out that code, the statistics and plots can be generated by the command:
+
+ ```commandline
+ python src/evaluate_documents.py dataset=cdcp_base metric=METRIC
+ ```
+
+ where `METRIC` refers to one of the available metric configs in `configs/metric` (see [metrics](https://github.com/ArneBinder/pytorch-ie-hydra-template-1/tree/main/configs/metric)).
+
+ This also requires the following dataset config for this dataset at `configs/dataset/cdcp_base.yaml` in the checked-out repository:
+
+ ```yaml
+ _target_: src.utils.execute_pipeline
+ input:
+   _target_: pie_datasets.DatasetDict.load_dataset
+   path: pie/cdcp
+   revision: 001722894bdca6df6a472d0d186a3af103e392c5
+ ```
+
+ For token-based metrics, this uses the `bert-base-uncased` tokenizer loaded via `transformers.AutoTokenizer` (see [AutoTokenizer](https://huggingface.co/docs/transformers/v4.37.1/en/model_doc/auto#transformers.AutoTokenizer) and [bert-base-uncased](https://huggingface.co/bert-base-uncased)) to tokenize the `text` of `TextDocumentWithLabeledSpansAndBinaryRelations` documents (see [document type](https://github.com/ArneBinder/pie-modules/blob/main/src/pie_modules/documents.py)).
+
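+ A minimal sketch of this tokenization step (whether special tokens are counted follows the metric implementation; they are omitted here):
+
+ ```python
+ from pie_datasets import load_dataset
+ from pie_modules.documents import TextDocumentWithLabeledSpansAndBinaryRelations
+ from transformers import AutoTokenizer
+
+ dataset = load_dataset("pie/cdcp").to_document_type(TextDocumentWithLabeledSpansAndBinaryRelations)
+ doc = dataset["train"][0]
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+ tokens = tokenizer.tokenize(doc.text)
+
+ # document length in tokens, roughly what the token length statistics below report
+ print(len(tokens))
+ ```
+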
+ #### Relation argument (outer) token distance per label
+
+ The distance is measured from the first token of the first argumentative unit to the last token of the last one, i.e. the outer distance between a relation's arguments.
+
+ We collect the following statistics: number of documents in the split (*no. doc*), no. of relations (*len*), mean of token distance (*mean*), standard deviation of the distance (*std*), minimum outer distance (*min*), and maximum outer distance (*max*).
+ We also present histograms in the collapsible sections below, showing the distribution of these relation distances (x-axis) and their counts (y-axis).
+
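+ As an illustration, an outer distance for a single relation could be computed like this (a sketch, not the exact metric implementation; `doc` and `tokenizer` as in the snippet above):
+
+ ```python
+ def outer_token_distance(relation, text) -> int:
+     """Token count from the start of the earlier argument to the end of the later one."""
+     first = min(relation.head, relation.tail, key=lambda span: span.start)
+     last = max(relation.head, relation.tail, key=lambda span: span.end)
+     return len(tokenizer.tokenize(text[first.start : last.end]))
+
+ rel = doc.binary_relations[0]
+ print(rel.label, outer_token_distance(rel, doc.text))
+ ```
+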
+ <details>
+ <summary>Command</summary>
+
+ ```
+ python src/evaluate_documents.py dataset=cdcp_base metric=relation_argument_token_distances
+ ```
+
+ </details>
+
+ ##### train (580 documents)
+
+ |          |  len | max |   mean | min |    std |
+ | :------- | ---: | --: | -----: | --: | -----: |
+ | ALL      | 2204 | 240 | 48.839 |   8 | 31.462 |
+ | evidence |   94 | 196 | 66.723 |  14 | 42.444 |
+ | reason   | 2110 | 240 | 48.043 |   8 |  30.64 |
+
+ <details>
+ <summary>Histogram (split: train, 580 documents)</summary>
+
+ ![rtd-label_cdcp_train.png](img%2Frtd-label_cdcp_train.png)
+
+ </details>
+
+ ##### test (150 documents)
+
+ |          | len | max |   mean | min |    std |
+ | :------- | --: | --: | -----: | --: | -----: |
+ | ALL      | 648 | 212 | 51.299 |   8 | 31.159 |
+ | evidence |  52 | 170 | 73.923 |  20 | 39.855 |
+ | reason   | 596 | 212 | 49.326 |   8 |  29.47 |
+
+ <details>
+ <summary>Histogram (split: test, 150 documents)</summary>
+
+ ![rtd-label_cdcp_test.png](img%2Frtd-label_cdcp_test.png)
+
+ </details>
+
+ #### Span lengths (tokens)
+
+ The span length is measured from the first to the last token of each argumentative unit.
+
+ We collect the following statistics: number of documents in the split (*no. doc*), no. of spans (*len*), mean of number of tokens in a span (*mean*), standard deviation of the number of tokens (*std*), minimum tokens in a span (*min*), and maximum tokens in a span (*max*).
+ We also present histograms in the collapsible sections below, showing the distribution of these span lengths (x-axis) and their counts (y-axis).
+
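+ A single span's token length can be obtained analogously (a sketch; `doc` and `tokenizer` as in the snippets above):
+
+ ```python
+ # token length of each labeled span in one converted document
+ span_lengths = [
+     len(tokenizer.tokenize(doc.text[span.start : span.end])) for span in doc.labeled_spans
+ ]
+ print(span_lengths)
+ ```
+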
+ <details>
+ <summary>Command</summary>
+
+ ```
+ python src/evaluate_documents.py dataset=cdcp_base metric=span_lengths_tokens
+ ```
+
+ </details>
+
+ | statistics |  train |   test |
+ | :--------- | -----: | -----: |
+ | no. doc    |    580 |    150 |
+ | len        |   3901 |   1026 |
+ | mean       | 19.441 | 18.758 |
+ | std        |  11.71 | 10.388 |
+ | min        |      2 |      3 |
+ | max        |    142 |     83 |
+
+ <details>
+ <summary>Histogram (split: train, 580 documents)</summary>
+
+ ![slt_cdcp_train.png](img%2Fslt_cdcp_train.png)
+
+ </details>
+ <details>
+ <summary>Histogram (split: test, 150 documents)</summary>
+
+ ![slt_cdcp_test.png](img%2Fslt_cdcp_test.png)
+
+ </details>
+
+ #### Token length (tokens)
+
+ The document token length is measured from the first token of the document to the last one.
+
+ We collect the following statistics: number of documents in the split (*no. doc*), mean of document token length (*mean*), standard deviation of the length (*std*), minimum number of tokens in a document (*min*), and maximum number of tokens in a document (*max*).
+ We also present histograms in the collapsible sections below, showing the distribution of these document token lengths (x-axis) and their counts (y-axis).
+
+ <details>
+ <summary>Command</summary>
+
+ ```
+ python src/evaluate_documents.py dataset=cdcp_base metric=count_text_tokens
+ ```
+
+ </details>
+
+ | statistics |   train |    test |
+ | :--------- | ------: | ------: |
+ | no. doc    |     580 |     150 |
+ | mean       | 130.781 | 128.673 |
+ | std        | 101.121 |  98.708 |
+ | min        |      13 |      15 |
+ | max        |     562 |     571 |
+
+ <details>
+ <summary>Histogram (split: train, 580 documents)</summary>
+
+ ![tl_cdcp_train.png](img%2Ftl_cdcp_train.png)
+
+ </details>
+ <details>
+ <summary>Histogram (split: test, 150 documents)</summary>
+
+ ![tl_cdcp_test.png](img%2Ftl_cdcp_test.png)
+
+ </details>

cdcp.py CHANGED
@@ -1,143 +1,143 @@
- import dataclasses
- import logging
- from typing import Any, Dict, List, Optional
-
- import datasets
- from pie_modules.document.processing.text_span_trimmer import trim_text_spans
- from pytorch_ie.annotations import BinaryRelation, LabeledSpan
- from pytorch_ie.core import Annotation, AnnotationList, annotation_field
- from pytorch_ie.documents import (
-     TextBasedDocument,
-     TextDocumentWithLabeledSpansAndBinaryRelations,
- )
-
- from pie_datasets import GeneratorBasedBuilder
-
- log = logging.getLogger(__name__)
-
-
- def dl2ld(dict_of_lists):
-     return [dict(zip(dict_of_lists, t)) for t in zip(*dict_of_lists.values())]
-
-
- def ld2dl(list_of_dicts, keys: Optional[List[str]] = None):
-     return {k: [d[k] for d in list_of_dicts] for k in keys}
-
-
- @dataclasses.dataclass(frozen=True)
- class Attribute(Annotation):
-     value: str
-     annotation: Annotation
-
-
- @dataclasses.dataclass
- class CDCPDocument(TextBasedDocument):
-     propositions: AnnotationList[LabeledSpan] = annotation_field(target="text")
-     relations: AnnotationList[BinaryRelation] = annotation_field(target="propositions")
-     urls: AnnotationList[Attribute] = annotation_field(target="propositions")
-
-
- def example_to_document(
-     example: Dict[str, Any],
-     relation_label: datasets.ClassLabel,
-     proposition_label: datasets.ClassLabel,
- ):
-     document = CDCPDocument(id=example["id"], text=example["text"])
-     for proposition_dict in dl2ld(example["propositions"]):
-         proposition = LabeledSpan(
-             start=proposition_dict["start"],
-             end=proposition_dict["end"],
-             label=proposition_label.int2str(proposition_dict["label"]),
-         )
-         document.propositions.append(proposition)
-         if proposition_dict.get("url", "") != "":
-             url = Attribute(annotation=proposition, value=proposition_dict["url"])
-             document.urls.append(url)
-
-     for relation_dict in dl2ld(example["relations"]):
-         relation = BinaryRelation(
-             head=document.propositions[relation_dict["head"]],
-             tail=document.propositions[relation_dict["tail"]],
-             label=relation_label.int2str(relation_dict["label"]),
-         )
-         document.relations.append(relation)
-
-     return document
-
-
- def document_to_example(
-     document: CDCPDocument,
-     relation_label: datasets.ClassLabel,
-     proposition_label: datasets.ClassLabel,
- ) -> Dict[str, Any]:
-     result = {"id": document.id, "text": document.text}
-     proposition2dict = {}
-     proposition2idx = {}
-     for idx, proposition in enumerate(document.propositions):
-         proposition2dict[proposition] = {
-             "start": proposition.start,
-             "end": proposition.end,
-             "label": proposition_label.str2int(proposition.label),
-             "url": "",
-         }
-         proposition2idx[proposition] = idx
-     for url in document.urls:
-         proposition2dict[url.annotation]["url"] = url.value
-
-     result["propositions"] = ld2dl(
-         proposition2dict.values(), keys=["start", "end", "label", "url"]
-     )
-
-     relations = [
-         {
-             "head": proposition2idx[relation.head],
-             "tail": proposition2idx[relation.tail],
-             "label": relation_label.str2int(relation.label),
-         }
-         for relation in document.relations
-     ]
-     result["relations"] = ld2dl(relations, keys=["head", "tail", "label"])
-
-     return result
-
-
- def convert_to_text_document_with_labeled_spans_and_binary_relations(
-     document: CDCPDocument,
-     verbose: bool = True,
- ) -> TextDocumentWithLabeledSpansAndBinaryRelations:
-     doc_simplified = document.as_type(
-         TextDocumentWithLabeledSpansAndBinaryRelations,
-         field_mapping={"propositions": "labeled_spans", "relations": "binary_relations"},
-     )
-     result = trim_text_spans(
-         doc_simplified,
-         layer="labeled_spans",
-         verbose=verbose,
-     )
-     return result
-
-
- class CDCP(GeneratorBasedBuilder):
-     DOCUMENT_TYPE = CDCPDocument
-
-     DOCUMENT_CONVERTERS = {
-         TextDocumentWithLabeledSpansAndBinaryRelations: convert_to_text_document_with_labeled_spans_and_binary_relations
-     }
-
-     BASE_DATASET_PATH = "DFKI-SLT/cdcp"
-     BASE_DATASET_REVISION = "3cf79257900b3f97e4b8f9faae2484b1a534f484"
-
-     BUILDER_CONFIGS = [datasets.BuilderConfig(name="default")]
-
-     DEFAULT_CONFIG_NAME = "default"  # type: ignore
-
-     def _generate_document_kwargs(self, dataset):
-         return {
-             "relation_label": dataset.features["relations"].feature["label"],
-             "proposition_label": dataset.features["propositions"].feature["label"],
-         }
-
-     def _generate_document(self, example, relation_label, proposition_label):
-         return example_to_document(
-             example, relation_label=relation_label, proposition_label=proposition_label
-         )
+ import dataclasses
+ import logging
+ from typing import Any, Dict, List, Optional
+
+ import datasets
+ from pie_core import Annotation, AnnotationLayer, annotation_field
+ from pie_modules.annotations import BinaryRelation, LabeledSpan
+ from pie_modules.document.processing.text_span_trimmer import trim_text_spans
+ from pie_modules.documents import (
+     TextBasedDocument,
+     TextDocumentWithLabeledSpansAndBinaryRelations,
+ )
+
+ from pie_datasets import GeneratorBasedBuilder
+
+ log = logging.getLogger(__name__)
+
+
+ def dl2ld(dict_of_lists):
+     return [dict(zip(dict_of_lists, t)) for t in zip(*dict_of_lists.values())]
+
+
+ def ld2dl(list_of_dicts, keys: Optional[List[str]] = None):
+     return {k: [d[k] for d in list_of_dicts] for k in keys}
+
+
+ @dataclasses.dataclass(frozen=True)
+ class Attribute(Annotation):
+     value: str
+     annotation: Annotation
+
+
+ @dataclasses.dataclass
+ class CDCPDocument(TextBasedDocument):
+     propositions: AnnotationLayer[LabeledSpan] = annotation_field(target="text")
+     relations: AnnotationLayer[BinaryRelation] = annotation_field(target="propositions")
+     urls: AnnotationLayer[Attribute] = annotation_field(target="propositions")
+
+
+ def example_to_document(
+     example: Dict[str, Any],
+     relation_label: datasets.ClassLabel,
+     proposition_label: datasets.ClassLabel,
+ ):
+     document = CDCPDocument(id=example["id"], text=example["text"])
+     for proposition_dict in dl2ld(example["propositions"]):
+         proposition = LabeledSpan(
+             start=proposition_dict["start"],
+             end=proposition_dict["end"],
+             label=proposition_label.int2str(proposition_dict["label"]),
+         )
+         document.propositions.append(proposition)
+         if proposition_dict.get("url", "") != "":
+             url = Attribute(annotation=proposition, value=proposition_dict["url"])
+             document.urls.append(url)
+
+     for relation_dict in dl2ld(example["relations"]):
+         relation = BinaryRelation(
+             head=document.propositions[relation_dict["head"]],
+             tail=document.propositions[relation_dict["tail"]],
+             label=relation_label.int2str(relation_dict["label"]),
+         )
+         document.relations.append(relation)
+
+     return document
+
+
+ def document_to_example(
+     document: CDCPDocument,
+     relation_label: datasets.ClassLabel,
+     proposition_label: datasets.ClassLabel,
+ ) -> Dict[str, Any]:
+     result = {"id": document.id, "text": document.text}
+     proposition2dict = {}
+     proposition2idx = {}
+     for idx, proposition in enumerate(document.propositions):
+         proposition2dict[proposition] = {
+             "start": proposition.start,
+             "end": proposition.end,
+             "label": proposition_label.str2int(proposition.label),
+             "url": "",
+         }
+         proposition2idx[proposition] = idx
+     for url in document.urls:
+         proposition2dict[url.annotation]["url"] = url.value
+
+     result["propositions"] = ld2dl(
+         proposition2dict.values(), keys=["start", "end", "label", "url"]
+     )
+
+     relations = [
+         {
+             "head": proposition2idx[relation.head],
+             "tail": proposition2idx[relation.tail],
+             "label": relation_label.str2int(relation.label),
+         }
+         for relation in document.relations
+     ]
+     result["relations"] = ld2dl(relations, keys=["head", "tail", "label"])
+
+     return result
+
+
+ def convert_to_text_document_with_labeled_spans_and_binary_relations(
+     document: CDCPDocument,
+     verbose: bool = True,
+ ) -> TextDocumentWithLabeledSpansAndBinaryRelations:
+     doc_simplified = document.as_type(
+         TextDocumentWithLabeledSpansAndBinaryRelations,
+         field_mapping={"propositions": "labeled_spans", "relations": "binary_relations"},
+     )
+     result = trim_text_spans(
+         doc_simplified,
+         layer="labeled_spans",
+         verbose=verbose,
+     )
+     return result
+
+
+ class CDCP(GeneratorBasedBuilder):
+     DOCUMENT_TYPE = CDCPDocument
+
+     DOCUMENT_CONVERTERS = {
+         TextDocumentWithLabeledSpansAndBinaryRelations: convert_to_text_document_with_labeled_spans_and_binary_relations
+     }
+
+     BASE_DATASET_PATH = "DFKI-SLT/cdcp"
+     BASE_DATASET_REVISION = "3cf79257900b3f97e4b8f9faae2484b1a534f484"
+
+     BUILDER_CONFIGS = [datasets.BuilderConfig(name="default")]
+
+     DEFAULT_CONFIG_NAME = "default"  # type: ignore
+
+     def _generate_document_kwargs(self, dataset):
+         return {
+             "relation_label": dataset.features["relations"].feature["label"],
+             "proposition_label": dataset.features["propositions"].feature["label"],
+         }
+
+     def _generate_document(self, example, relation_label, proposition_label):
+         return example_to_document(
+             example, relation_label=relation_label, proposition_label=proposition_label
+         )

requirements.txt CHANGED
@@ -1,2 +1,2 @@
- pie-datasets>=0.6.0,<0.9.0
- pie-modules>=0.8.0,<0.9.0
+ pie-datasets>=0.10.11,<0.11.0
+ pie-modules>=0.15.9,<0.16.0