RichardWang committed on
Commit
1b317da
1 Parent(s): cf26a67

add ontonotes_conll dataset (#3853)

* add ontonotesv5_conll2012 dataset

* Apply suggestions from code review

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* rename, fix doc, fix dummy_data

* fix flake8

* Apply suggestions from code review

* typo

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

Commit from https://github.com/huggingface/datasets/commit/8f205aa1c722cfc7479a714ae44cf1f712ebb61d

README.md ADDED
@@ -0,0 +1,233 @@
1
+ ---
2
+ annotations_creators:
3
+ - expert-generated
4
+ language_creators:
5
+ - found
6
+ languages:
7
+ - ar
8
+ - en
9
+ - zh
10
+ licenses:
11
+ - cc-by-nc-nd-4-0
12
+ multilinguality:
13
+ - multilingual
14
+ paperswithcode_id: ontonotes-5-0
15
+ pretty_name: CoNLL2012 shared task data based on OntoNotes 5.0
16
+ size_categories:
17
+ - 10K<n<100K
18
+ source_datasets:
19
+ - original
20
+ task_categories:
21
+ - structure-prediction
22
+ task_ids:
23
+ - named-entity-recognition
24
+ - part-of-speech-tagging
25
+ - semantic-role-labeling
26
+ - coreference-resolution
27
+ - parsing
28
+ - lemmatization
29
+ - word-sense-disambiguation
30
+ ---
31
+
32
+ # Dataset Card for CoNLL2012 shared task data based on OntoNotes 5.0
33
+
34
+ ## Table of Contents
35
+ - [Table of Contents](#table-of-contents)
36
+ - [Dataset Description](#dataset-description)
37
+ - [Dataset Summary](#dataset-summary)
38
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
39
+ - [Languages](#languages)
40
+ - [Dataset Structure](#dataset-structure)
41
+ - [Data Instances](#data-instances)
42
+ - [Data Fields](#data-fields)
43
+ - [Data Splits](#data-splits)
44
+ - [Dataset Creation](#dataset-creation)
45
+ - [Curation Rationale](#curation-rationale)
46
+ - [Source Data](#source-data)
47
+ - [Annotations](#annotations)
48
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
49
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
50
+ - [Social Impact of Dataset](#social-impact-of-dataset)
51
+ - [Discussion of Biases](#discussion-of-biases)
52
+ - [Other Known Limitations](#other-known-limitations)
53
+ - [Additional Information](#additional-information)
54
+ - [Dataset Curators](#dataset-curators)
55
+ - [Licensing Information](#licensing-information)
56
+ - [Citation Information](#citation-information)
57
+ - [Contributions](#contributions)
58
+
59
+ ## Dataset Description
60
+
61
+ - **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html)
62
+ - **Repository:** [Mendeley](https://data.mendeley.com/datasets/zmycy7t9h9)
63
+ - **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/)
64
+ - **Leaderboard:**
65
+ - **Point of Contact:**
66
+
67
+ ### Dataset Summary
68
+
69
+ OntoNotes v5.0 is the final version of the OntoNotes corpus: a large-scale, multi-genre,
70
+ multilingual corpus manually annotated with syntactic, semantic and discourse information.
71
+
72
+ This dataset is the extended version of OntoNotes v5.0 used in the CoNLL-2012 shared task.
73
+ It includes v4 train/dev and v9 test data for English/Chinese/Arabic, and the corrected v12 train/dev/test data (English only).
74
+
75
+ The data is sourced from the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which appears to be identical to the official data, but users should use this dataset at their own risk.
76
+
77
+ See also the summaries from Papers with Code: [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1).
78
+
79
+ For more detailed information about the dataset, such as its annotation scheme and tag sets, refer to the documents in the Mendeley repo mentioned above.
80
+
81
+ ### Supported Tasks and Leaderboards
82
+
83
+ - [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
84
+ - [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes)
85
+ - [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes)
86
+ - ...
87
+
88
+ ### Languages
89
+
90
+ V4 data is available for Arabic, Chinese, and English; V12 data is available for English only.
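+
+ A minimal loading sketch (the config names `english_v4`, `chinese_v4`, `arabic_v4`, and `english_v12` come from this repo's script; standard `datasets` usage is assumed):
+
+ ```python
+ from datasets import load_dataset
+
+ # The config name is "{language}_{conll_version}":
+ # one of english_v4, chinese_v4, arabic_v4, english_v12.
+ dataset = load_dataset("conll2012_ontonotesv5", "english_v4")
+ print(dataset["train"][0]["document_id"])
+ ```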
91
+
92
+ ## Dataset Structure
93
+
94
+ ### Data Instances
95
+
96
+ ```
+ {'document_id': 'nw/wsj/23/wsj_2311',
+  'sentences': [{'part_id': 0,
+    'words': ['CONCORDE', 'trans-Atlantic', 'flights', 'are', '$', '2,400', 'to', 'Paris', 'and', '$', '3,200', 'to', 'London', '.'],
+    'pos_tags': [25, 18, 27, 43, 2, 12, 17, 25, 11, 2, 12, 17, 25, 7],
+    'parse_tree': '(TOP(S(NP (NNP CONCORDE) (JJ trans-Atlantic) (NNS flights) )(VP (VBP are) (NP(NP(NP ($ $) (CD 2,400) )(PP (IN to) (NP (NNP Paris) ))) (CC and) (NP(NP ($ $) (CD 3,200) )(PP (IN to) (NP (NNP London) ))))) (. .) ))',
+    'predicate_lemmas': [None, None, None, 'be', None, None, None, None, None, None, None, None, None, None],
+    'predicate_framenet_ids': [None, None, None, '01', None, None, None, None, None, None, None, None, None, None],
+    'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None, None],
+    'speaker': None,
+    'named_entities': [7, 6, 0, 0, 0, 15, 0, 5, 0, 0, 15, 0, 5, 0],
+    'srl_frames': [{'verb': 'are',
+                    'frames': ['B-ARG1', 'I-ARG1', 'I-ARG1', 'B-V', 'B-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'O']}],
+    'coref_spans': []},
+   {'part_id': 0,
+    'words': ['In', 'a', 'Centennial', 'Journal', 'article', 'Oct.', '5', ',', 'the', 'fares', 'were', 'reversed', '.'],
+    'pos_tags': [17, 13, 25, 25, 24, 25, 12, 4, 13, 27, 40, 42, 7],
+    'parse_tree': '(TOP(S(PP (IN In) (NP (DT a) (NML (NNP Centennial) (NNP Journal) ) (NN article) ))(NP (NNP Oct.) (CD 5) ) (, ,) (NP (DT the) (NNS fares) )(VP (VBD were) (VP (VBN reversed) )) (. .) ))',
+    'predicate_lemmas': [None, None, None, None, None, None, None, None, None, None, None, 'reverse', None],
+    'predicate_framenet_ids': [None, None, None, None, None, None, None, None, None, None, None, '01', None],
+    'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None],
+    'speaker': None,
+    'named_entities': [0, 0, 4, 22, 0, 12, 30, 0, 0, 0, 0, 0, 0],
+    'srl_frames': [{'verb': 'reversed',
+                    'frames': ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'B-ARGM-TMP', 'I-ARGM-TMP', 'O', 'B-ARG1', 'I-ARG1', 'O', 'B-V', 'O']}],
+    'coref_spans': []}]}
+ ```
125
+
126
+ ### Data Fields
127
+
128
+ - **`document_id`** (*`str`*): This is a variation on the document filename
129
+ - **`sentences`** (*`List[Dict]`*): All sentences of the same document are in a single example for the convenience of concatenating sentences.
130
+
131
+ Every element in `sentences` is a *`Dict`* composed of the following data fields:
132
+ - **`part_id`** (*`int`*) : Some files are divided into multiple parts numbered as 000, 001, 002, ... etc.
133
+ - **`words`** (*`List[str]`*) : The tokens as segmented/tokenized in the Treebank.
134
+ - **`pos_tags`** (*`List[ClassLabel]` or `List[str]`*) : This is the Penn-Treebank-style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with an `XX` tag. The verb is marked with just a `VERB` tag.
135
+ - tag set : Note that the tag sets below were derived by scanning all the data, and they appear to differ slightly from the officially stated tag sets. See the official documents in the [Mendeley repo](https://data.mendeley.com/datasets/zmycy7t9h9).
136
+ - arabic : `str`. Arabic POS tags are compound and complex, which makes them hard to represent with a `ClassLabel`.
137
+ - chinese v4 : `datasets.ClassLabel(num_classes=36, names=["X", "AD", "AS", "BA", "CC", "CD", "CS", "DEC", "DEG", "DER", "DEV", "DT", "ETC", "FW", "IJ", "INF", "JJ", "LB", "LC", "M", "MSP", "NN", "NR", "NT", "OD", "ON", "P", "PN", "PU", "SB", "SP", "URL", "VA", "VC", "VE", "VV",])`, where `X` marks a missing POS tag.
138
+ - english v4 : `datasets.ClassLabel(num_classes=49, names=["XX", "``", "$", "''", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB",])`, where `XX` marks a missing POS tag, and `-LRB-`/`-RRB-` stand for "`(`" / "`)`".
139
+ - english v12 : `datasets.ClassLabel(num_classes=51, names=["XX", "``", "$", "''", "*", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "VERB", "WDT", "WP", "WP$", "WRB",])`, where `XX` marks a missing POS tag, and `-LRB-`/`-RRB-` stand for "`(`" / "`)`".
140
+ - **`parse_tree`** (*`Optional[str]`*) : A serialized NLTK Tree representing the parse. It includes POS tags as pre-terminal nodes. When the parse information is missing, the parse will be `None`.
141
+ - **`predicate_lemmas`** (*`List[Optional[str]]`*) : The predicate lemma of the words for which we have semantic role information or word sense information. All other indices are `None`.
142
+ - **`predicate_framenet_ids`** (*`List[Optional[int]]`*) : The PropBank frameset ID of the lemmas in predicate_lemmas, or `None`.
143
+ - **`word_senses`** (*`List[Optional[float]]`*) : The word senses for the words in the sentence, or `None`. These are floats because the word sense can have values after the decimal, like `1.1`.
144
+ - **`speaker`** (*`Optional[str]`*) : This is the speaker or author name where available. Mostly in Broadcast Conversation and Web Log data. When it is not available, it will be `None`.
145
+ - **`named_entities`** (*`List[ClassLabel]`*) : The BIO tags for named entities in the sentence.
146
+ - tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])`
147
+ - **`srl_frames`** (*`List[{"verb": str, "frames": List[str]}]`*) : A list of dictionaries, one per verbal predicate, pairing the verb with its PropBank frame labels in BIO format.
148
+ - **`coref_spans`** (*`List[List[int]]`*) : The spans for entity mentions involved in coreference resolution within the sentence. Each element is a tuple composed of (cluster_id, start_index, end_index). Indices are inclusive.
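+
+ A minimal sketch for mapping the integer `ClassLabel` values (e.g. `pos_tags`, `named_entities`) back to tag strings, assuming `dataset` was loaded as in the sketch above; the feature-access path follows the nested features defined by this repo's script:
+
+ ```python
+ # The per-sentence features live under features["sentences"][0].
+ ner_feature = dataset["train"].features["sentences"][0]["named_entities"].feature
+ sentence = dataset["train"][0]["sentences"][0]
+ ner_tags = [ner_feature.int2str(i) for i in sentence["named_entities"]]
+ print(list(zip(sentence["words"], ner_tags)))
+ ```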
149
+
150
+ ### Data Splits
151
+
152
+ Each config (`arabic_v4`, `chinese_v4`, `english_v4`, `english_v12`) has 3 splits: _train_, _validation_, and _test_.
153
+
154
+ ## Dataset Creation
155
+
156
+ ### Curation Rationale
157
+
158
+ [More Information Needed]
159
+
160
+ ### Source Data
161
+
162
+ #### Initial Data Collection and Normalization
163
+
164
+ [More Information Needed]
165
+
166
+ #### Who are the source language producers?
167
+
168
+ [More Information Needed]
169
+
170
+ ### Annotations
171
+
172
+ #### Annotation process
173
+
174
+ [More Information Needed]
175
+
176
+ #### Who are the annotators?
177
+
178
+ [More Information Needed]
179
+
180
+ ### Personal and Sensitive Information
181
+
182
+ [More Information Needed]
183
+
184
+ ## Considerations for Using the Data
185
+
186
+ ### Social Impact of Dataset
187
+
188
+ [More Information Needed]
189
+
190
+ ### Discussion of Biases
191
+
192
+ [More Information Needed]
193
+
194
+ ### Other Known Limitations
195
+
196
+ [More Information Needed]
197
+
198
+ ## Additional Information
199
+
200
+ ### Dataset Curators
201
+
202
+ [More Information Needed]
203
+
204
+ ### Licensing Information
205
+
206
+ [More Information Needed]
207
+
208
+ ### Citation Information
209
+
210
+ ```
211
+ @inproceedings{pradhan-etal-2013-towards,
212
+ title = "Towards Robust Linguistic Analysis using {O}nto{N}otes",
213
+ author = {Pradhan, Sameer and
214
+ Moschitti, Alessandro and
215
+ Xue, Nianwen and
216
+ Ng, Hwee Tou and
217
+ Bj{\"o}rkelund, Anders and
218
+ Uryupina, Olga and
219
+ Zhang, Yuchen and
220
+ Zhong, Zhi},
221
+ booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
222
+ month = aug,
223
+ year = "2013",
224
+ address = "Sofia, Bulgaria",
225
+ publisher = "Association for Computational Linguistics",
226
+ url = "https://aclanthology.org/W13-3516",
227
+ pages = "143--152",
228
+ }
229
+ ```
230
+
231
+ ### Contributions
232
+
233
+ Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
conll2012_ontonotesv5.py ADDED
@@ -0,0 +1,819 @@
1
+ # coding=utf-8
2
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """CoNLL2012 shared task data based on OntoNotes 5.0"""
16
+
17
+ import os
18
+ from collections import defaultdict
19
+ from glob import glob
20
+ from typing import DefaultDict, Iterator, List, Optional, Tuple
21
+
22
+ import datasets
23
+
24
+
25
+ _CITATION = """\
26
+ @inproceedings{pradhan-etal-2013-towards,
27
+ title = "Towards Robust Linguistic Analysis using {O}nto{N}otes",
28
+ author = {Pradhan, Sameer and
29
+ Moschitti, Alessandro and
30
+ Xue, Nianwen and
31
+ Ng, Hwee Tou and
32
+ Bj{\"o}rkelund, Anders and
33
+ Uryupina, Olga and
34
+ Zhang, Yuchen and
35
+ Zhong, Zhi},
36
+ booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
37
+ month = aug,
38
+ year = "2013",
39
+ address = "Sofia, Bulgaria",
40
+ publisher = "Association for Computational Linguistics",
41
+ url = "https://aclanthology.org/W13-3516",
42
+ pages = "143--152",
43
+ }
44
+
45
+ Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, \
46
+ Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, \
47
+ Mohammed El-Bachouti, Robert Belvin, Ann Houston. \
48
+ OntoNotes Release 5.0 LDC2013T19. \
49
+ Web Download. Philadelphia: Linguistic Data Consortium, 2013.
50
+ """
51
+
52
+ _DESCRIPTION = """\
53
+ OntoNotes v5.0 is the final version of the OntoNotes corpus: a large-scale, multi-genre,
54
+ multilingual corpus manually annotated with syntactic, semantic and discourse information.
55
+
56
+ This dataset is the extended version of OntoNotes v5.0 used in the CoNLL-2012 shared task.
57
+ It includes v4 train/dev and v9 test data for English/Chinese/Arabic, and the corrected v12 train/dev/test data (English only).
58
+
59
+ The data is sourced from the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which appears to be identical to the official data, but users should use this dataset at their own risk.
60
+
61
+ See also the summaries from Papers with Code: [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1).
62
+
63
+ For more detailed information about the dataset, such as its annotation scheme and tag sets, refer to the documents in the Mendeley repo mentioned above.
64
+ """
65
+
66
+ _URL = "https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip"
67
+
68
+
69
+ class Conll2012Ontonotesv5Config(datasets.BuilderConfig):
70
+ """BuilderConfig for the CoNLL formatted OntoNotes dataset."""
71
+
72
+ def __init__(self, language=None, conll_version=None, **kwargs):
73
+ """BuilderConfig for the CoNLL formatted OntoNotes dataset.
74
+
75
+ Args:
76
+ language: string, one of {"english", "chinese", "arabic"}.
77
+ conll_version: string, "v4" or "v12". Note that v12 is only available for English.
78
+ **kwargs: keyword arguments forwarded to super.
79
+ """
80
+ assert language in ["english", "chinese", "arabic"]
81
+ assert conll_version in ["v4", "v12"]
82
+ if conll_version == "v12":
83
+ assert language == "english"
84
+ super(Conll2012Ontonotesv5Config, self).__init__(
85
+ name=f"{language}_{conll_version}",
86
+ description=f"{conll_version} of CoNLL formatted OntoNotes dataset for {language}.",
87
+ version=datasets.Version("1.0.0"), # hf dataset script version
88
+ **kwargs,
89
+ )
90
+ self.language = language
91
+ self.conll_version = conll_version
92
+
93
+
94
+ class Conll2012Ontonotesv5(datasets.GeneratorBasedBuilder):
95
+ """The CoNLL formatted OntoNotes dataset."""
96
+
97
+ BUILDER_CONFIGS = [
98
+ Conll2012Ontonotesv5Config(
99
+ language=lang,
100
+ conll_version="v4",
101
+ )
102
+ for lang in ["english", "chinese", "arabic"]
103
+ ] + [
104
+ Conll2012Ontonotesv5Config(
105
+ language="english",
106
+ conll_version="v12",
107
+ )
108
+ ]
109
+
110
+ def _info(self):
111
+ lang = self.config.language
112
+ conll_version = self.config.conll_version
113
+ if lang == "arabic":
114
+ pos_tag_feature = datasets.Value("string")
115
+ else:
116
+ tag_set = _POS_TAGS[f"{lang}_{conll_version}"]
117
+ pos_tag_feature = datasets.ClassLabel(num_classes=len(tag_set), names=tag_set)
118
+
119
+ return datasets.DatasetInfo(
120
+ description=_DESCRIPTION,
121
+ features=datasets.Features(
122
+ {
123
+ "document_id": datasets.Value("string"),
124
+ "sentences": [
125
+ {
126
+ "part_id": datasets.Value("int32"),
127
+ "words": datasets.Sequence(datasets.Value("string")),
128
+ "pos_tags": datasets.Sequence(pos_tag_feature),
129
+ "parse_tree": datasets.Value("string"),
130
+ "predicate_lemmas": datasets.Sequence(datasets.Value("string")),
131
+ "predicate_framenet_ids": datasets.Sequence(datasets.Value("string")),
132
+ "word_senses": datasets.Sequence(datasets.Value("float32")),
133
+ "speaker": datasets.Value("string"),
134
+ "named_entities": datasets.Sequence(
135
+ datasets.ClassLabel(num_classes=37, names=_NAMED_ENTITY_TAGS)
136
+ ),
137
+ "srl_frames": [
138
+ {
139
+ "verb": datasets.Value("string"),
140
+ "frames": datasets.Sequence(datasets.Value("string")),
141
+ }
142
+ ],
143
+ "coref_spans": datasets.Sequence(datasets.Sequence(datasets.Value("int32"), length=3)),
144
+ }
145
+ ],
146
+ }
147
+ ),
148
+ homepage="https://conll.cemantix.org/2012/introduction.html",
149
+ citation=_CITATION,
150
+ )
151
+
152
+ def _split_generators(self, dl_manager):
153
+ lang = self.config.language
154
+ conll_version = self.config.conll_version
155
+ dl_dir = dl_manager.download_and_extract(_URL)
156
+ data_zip = glob(os.path.join(dl_dir, "**/conll-2012*"), recursive=True)[0]
157
+ ext_dir = dl_manager.extract(data_zip)
158
+ data_dir = os.path.join(ext_dir, f"conll-2012/{conll_version}/data")
159
+
160
+ return [
161
+ datasets.SplitGenerator(
162
+ name=datasets.Split.TRAIN,
163
+ gen_kwargs={"conll_files_directory": os.path.join(data_dir, f"train/data/{lang}")},
164
+ ),
165
+ datasets.SplitGenerator(
166
+ name=datasets.Split.VALIDATION,
167
+ gen_kwargs={"conll_files_directory": os.path.join(data_dir, f"development/data/{lang}")},
168
+ ),
169
+ datasets.SplitGenerator(
170
+ name=datasets.Split.TEST,
171
+ gen_kwargs={"conll_files_directory": os.path.join(data_dir, f"test/data/{lang}")},
172
+ ),
173
+ ]
174
+
175
+ def _generate_examples(self, conll_files_directory):
176
+ """Yields examples."""
177
+ conll_files = sorted(glob(os.path.join(conll_files_directory, "**/*gold_conll"), recursive=True))
178
+ for idx, conll_file in enumerate(conll_files):
179
+ sentences = []
180
+ for sent in Ontonotes().sentence_iterator(conll_file):
181
+ document_id = sent.document_id
182
+ sentences.append(
183
+ {
184
+ "part_id": sent.sentence_id, # should be part id, according to https://conll.cemantix.org/2012/data.html
185
+ "words": sent.words,
186
+ "pos_tags": sent.pos_tags,
187
+ "parse_tree": sent.parse_tree,
188
+ "predicate_lemmas": sent.predicate_lemmas,
189
+ "predicate_framenet_ids": sent.predicate_framenet_ids,
190
+ "word_senses": sent.word_senses,
191
+ "speaker": sent.speakers[0],
192
+ "named_entities": sent.named_entities,
193
+ "srl_frames": [{"verb": f[0], "frames": f[1]} for f in sent.srl_frames],
194
+ "coref_spans": [(c[0], *c[1]) for c in sent.coref_spans],
195
+ }
196
+ )
197
+ yield idx, {"document_id": document_id, "sentences": sentences}
198
+
199
+
200
+ # --------------------------------------------------------------------------------------------------------
201
+ # Tag set
202
+ _NAMED_ENTITY_TAGS = [
203
+ "O", # out of named entity
204
+ "B-PERSON",
205
+ "I-PERSON",
206
+ "B-NORP",
207
+ "I-NORP",
208
+ "B-FAC", # FACILITY
209
+ "I-FAC",
210
+ "B-ORG", # ORGANIZATION
211
+ "I-ORG",
212
+ "B-GPE",
213
+ "I-GPE",
214
+ "B-LOC",
215
+ "I-LOC",
216
+ "B-PRODUCT",
217
+ "I-PRODUCT",
218
+ "B-DATE",
219
+ "I-DATE",
220
+ "B-TIME",
221
+ "I-TIME",
222
+ "B-PERCENT",
223
+ "I-PERCENT",
224
+ "B-MONEY",
225
+ "I-MONEY",
226
+ "B-QUANTITY",
227
+ "I-QUANTITY",
228
+ "B-ORDINAL",
229
+ "I-ORDINAL",
230
+ "B-CARDINAL",
231
+ "I-CARDINAL",
232
+ "B-EVENT",
233
+ "I-EVENT",
234
+ "B-WORK_OF_ART",
235
+ "I-WORK_OF_ART",
236
+ "B-LAW",
237
+ "I-LAW",
238
+ "B-LANGUAGE",
239
+ "I-LANGUAGE",
240
+ ]
241
+
242
+ _POS_TAGS = {
243
+ "english_v4": [
244
+ "XX", # missing
245
+ "``",
246
+ "$",
247
+ "''",
248
+ ",",
249
+ "-LRB-", # (
250
+ "-RRB-", # )
251
+ ".",
252
+ ":",
253
+ "ADD",
254
+ "AFX",
255
+ "CC",
256
+ "CD",
257
+ "DT",
258
+ "EX",
259
+ "FW",
260
+ "HYPH",
261
+ "IN",
262
+ "JJ",
263
+ "JJR",
264
+ "JJS",
265
+ "LS",
266
+ "MD",
267
+ "NFP",
268
+ "NN",
269
+ "NNP",
270
+ "NNPS",
271
+ "NNS",
272
+ "PDT",
273
+ "POS",
274
+ "PRP",
275
+ "PRP$",
276
+ "RB",
277
+ "RBR",
278
+ "RBS",
279
+ "RP",
280
+ "SYM",
281
+ "TO",
282
+ "UH",
283
+ "VB",
284
+ "VBD",
285
+ "VBG",
286
+ "VBN",
287
+ "VBP",
288
+ "VBZ",
289
+ "WDT",
290
+ "WP",
291
+ "WP$",
292
+ "WRB",
293
+ ], # 49
294
+ "english_v12": [
295
+ "XX", # misssing
296
+ "``",
297
+ "$",
298
+ "''",
299
+ "*",
300
+ ",",
301
+ "-LRB-", # (
302
+ "-RRB-", # )
303
+ ".",
304
+ ":",
305
+ "ADD",
306
+ "AFX",
307
+ "CC",
308
+ "CD",
309
+ "DT",
310
+ "EX",
311
+ "FW",
312
+ "HYPH",
313
+ "IN",
314
+ "JJ",
315
+ "JJR",
316
+ "JJS",
317
+ "LS",
318
+ "MD",
319
+ "NFP",
320
+ "NN",
321
+ "NNP",
322
+ "NNPS",
323
+ "NNS",
324
+ "PDT",
325
+ "POS",
326
+ "PRP",
327
+ "PRP$",
328
+ "RB",
329
+ "RBR",
330
+ "RBS",
331
+ "RP",
332
+ "SYM",
333
+ "TO",
334
+ "UH",
335
+ "VB",
336
+ "VBD",
337
+ "VBG",
338
+ "VBN",
339
+ "VBP",
340
+ "VBZ",
341
+ "VERB",
342
+ "WDT",
343
+ "WP",
344
+ "WP$",
345
+ "WRB",
346
+ ], # 51
347
+ "chinese_v4": [
348
+ "X", # missing
349
+ "AD",
350
+ "AS",
351
+ "BA",
352
+ "CC",
353
+ "CD",
354
+ "CS",
355
+ "DEC",
356
+ "DEG",
357
+ "DER",
358
+ "DEV",
359
+ "DT",
360
+ "ETC",
361
+ "FW",
362
+ "IJ",
363
+ "INF",
364
+ "JJ",
365
+ "LB",
366
+ "LC",
367
+ "M",
368
+ "MSP",
369
+ "NN",
370
+ "NR",
371
+ "NT",
372
+ "OD",
373
+ "ON",
374
+ "P",
375
+ "PN",
376
+ "PU",
377
+ "SB",
378
+ "SP",
379
+ "URL",
380
+ "VA",
381
+ "VC",
382
+ "VE",
383
+ "VV",
384
+ ], # 36
385
+ }
386
+
387
+ # --------------------------------------------------------------------------------------------------------
388
+ # The CoNLL(2012) file reader
389
+ # Modified the original code to get rid of extra package dependency.
390
+ # Original code: https://github.com/allenai/allennlp-models/blob/main/allennlp_models/common/ontonotes.py
391
+
392
+
393
+ class OntonotesSentence:
394
+ """
395
+ A class representing the annotations available for a single CONLL formatted sentence.
396
+ # Parameters
397
+ document_id : `str`
398
+ This is a variation on the document filename
399
+ sentence_id : `int`
400
+ The integer ID of the sentence within a document.
401
+ words : `List[str]`
402
+ The tokens as segmented/tokenized in the Treebank.
403
+ pos_tags : `List[str]`
404
+ This is the Penn-Treebank-style part of speech. When parse information is missing,
405
+ all parts of speech except the one for which there is some sense or proposition
406
+ annotation are marked with an XX tag. The verb is marked with just a VERB tag.
407
+ parse_tree : `Optional[str]`
408
+ A serialized parse-tree string. It includes POS tags as pre-terminal nodes.
409
+ When the parse information is missing, the parse will be `None`.
410
+ predicate_lemmas : `List[Optional[str]]`
411
+ The predicate lemma of the words for which we have semantic role
412
+ information or word sense information. All other indices are `None`.
413
+ predicate_framenet_ids : `List[Optional[int]]`
414
+ The PropBank frameset ID of the lemmas in `predicate_lemmas`, or `None`.
415
+ word_senses : `List[Optional[float]]`
416
+ The word senses for the words in the sentence, or `None`. These are floats
417
+ because the word sense can have values after the decimal, like `1.1`.
418
+ speakers : `List[Optional[str]]`
419
+ The speaker information for the words in the sentence, if present, or `None`
420
+ This is the speaker or author name where available. Mostly in Broadcast Conversation
421
+ and Web Log data. When not available, the entries are `None`.
422
+ named_entities : `List[str]`
423
+ The BIO tags for named entities in the sentence.
424
+ srl_frames : `List[Tuple[str, List[str]]]`
425
+ A list of (verb, frame labels) tuples giving the PropBank
+ frame labels for each verbal predicate, in a BIO format.
427
+ coref_spans : `Set[TypedSpan]`
428
+ The spans for entity mentions involved in coreference resolution within the sentence.
429
+ Each element is a tuple composed of (cluster_id, (start_index, end_index)). Indices
430
+ are `inclusive`.
431
+ """
432
+
433
+ def __init__(
434
+ self,
435
+ document_id: str,
436
+ sentence_id: int,
437
+ words: List[str],
438
+ pos_tags: List[str],
439
+ parse_tree: Optional[str],
440
+ predicate_lemmas: List[Optional[str]],
441
+ predicate_framenet_ids: List[Optional[str]],
442
+ word_senses: List[Optional[float]],
443
+ speakers: List[Optional[str]],
444
+ named_entities: List[str],
445
+ srl_frames: List[Tuple[str, List[str]]],
446
+ coref_spans,
447
+ ) -> None:
448
+
449
+ self.document_id = document_id
450
+ self.sentence_id = sentence_id
451
+ self.words = words
452
+ self.pos_tags = pos_tags
453
+ self.parse_tree = parse_tree
454
+ self.predicate_lemmas = predicate_lemmas
455
+ self.predicate_framenet_ids = predicate_framenet_ids
456
+ self.word_senses = word_senses
457
+ self.speakers = speakers
458
+ self.named_entities = named_entities
459
+ self.srl_frames = srl_frames
460
+ self.coref_spans = coref_spans
461
+
462
+
463
+ class Ontonotes:
464
+ """
465
+ This `DatasetReader` is designed to read in the English OntoNotes v5.0 data
466
+ in the format used by the CoNLL 2011/2012 shared tasks. In order to use this
467
+ Reader, you must follow the instructions provided [here (v12 release)]
468
+ (https://cemantix.org/data/ontonotes.html), which will allow you to download
469
+ the CoNLL style annotations for the OntoNotes v5.0 release -- LDC2013T19.tgz
470
+ obtained from LDC.
471
+ Once you have run the scripts on the extracted data, you will have a folder
472
+ structured as follows:
473
+ ```
474
+ conll-formatted-ontonotes-5.0/
475
+ └── data
476
+ ├── development
477
+ └── data
478
+ └── english
479
+ └── annotations
480
+ ├── bc
481
+ ├── bn
482
+ ├── mz
483
+ ├── nw
484
+ ├── pt
485
+ ├── tc
486
+ └── wb
487
+ ├── test
488
+ └── data
489
+ └── english
490
+ └── annotations
491
+ ├── bc
492
+ ├── bn
493
+ ├── mz
494
+ ├── nw
495
+ ├── pt
496
+ ├── tc
497
+ └── wb
498
+ └── train
499
+ └── data
500
+ └── english
501
+ └── annotations
502
+ ├── bc
503
+ ├── bn
504
+ ├── mz
505
+ ├── nw
506
+ ├── pt
507
+ ├── tc
508
+ └── wb
509
+ ```
510
+ The file path provided to this class can then be any of the train, test or development
511
+ directories (or the top-level data directory, if you are not utilizing the splits).
512
+ The data has the following format, ordered by column.
513
+ 1. Document ID : `str`
514
+ This is a variation on the document filename
515
+ 2. Part number : `int`
516
+ Some files are divided into multiple parts numbered as 000, 001, 002, ... etc.
517
+ 3. Word number : `int`
518
+ This is the word index of the word in that sentence.
519
+ 4. Word : `str`
520
+ This is the token as segmented/tokenized in the Treebank. Initially the `*_skel` files
521
+ contain the placeholder [WORD] which gets replaced by the actual token from the
522
+ Treebank which is part of the OntoNotes release.
523
+ 5. POS Tag : `str`
524
+ This is the Penn Treebank style part of speech. When parse information is missing,
525
+ all parts of speech except the one for which there is some sense or proposition
526
+ annotation are marked with a XX tag. The verb is marked with just a VERB tag.
527
+ 6. Parse bit : `str`
528
+ This is the bracketed structure broken before the first open parenthesis in the parse,
529
+ and the word/part-of-speech leaf replaced with a `*`. When the parse information is
530
+ missing, the first word of a sentence is tagged as `(TOP*` and the last word is tagged
531
+ as `*)` and all intermediate words are tagged with a `*`.
532
+ 7. Predicate lemma : `str`
533
+ The predicate lemma is mentioned for the rows for which we have semantic role
534
+ information or word sense information. All other rows are marked with a "-".
535
+ 8. Predicate Frameset ID : `int`
536
+ The PropBank frameset ID of the predicate in Column 7.
537
+ 9. Word sense : `float`
538
+ This is the word sense of the word in Column 3.
539
+ 10. Speaker/Author : `str`
540
+ This is the speaker or author name where available. Mostly in Broadcast Conversation
541
+ and Web Log data. When not available the rows are marked with an "-".
542
+ 11. Named Entities : `str`
543
+ These columns identify the spans representing various named entities. For documents
544
+ which do not have named entity annotation, each line is represented with an `*`.
545
+ 12. Predicate Arguments : `str`
546
+ There is one column each of predicate argument structure information for the predicate
547
+ mentioned in Column 7. If there are no predicates tagged in a sentence this is a
548
+ single column with all rows marked with an `*`.
549
+ -1. Co-reference : `str`
550
+ Co-reference chain information encoded in a parenthesis structure. For documents that do
551
+ not have co-reference annotations, each line is represented with a "-".
552
+ """
553
+
554
+ def dataset_iterator(self, file_path: str) -> Iterator[OntonotesSentence]:
555
+ """
556
+ An iterator over the entire dataset, yielding all sentences processed.
557
+ """
558
+ for conll_file in self.dataset_path_iterator(file_path):
559
+ yield from self.sentence_iterator(conll_file)
560
+
561
+ @staticmethod
562
+ def dataset_path_iterator(file_path: str) -> Iterator[str]:
563
+ """
564
+ An iterator returning file_paths in a directory
565
+ containing CONLL-formatted files.
566
+ """
567
+ for root, _, files in list(os.walk(file_path)):
568
+ for data_file in sorted(files):
569
+ # These are a relic of the dataset pre-processing. Every
570
+ # file will be duplicated - one file called filename.gold_skel
571
+ # and one generated from the preprocessing called filename.gold_conll.
572
+ if not data_file.endswith("gold_conll"):
573
+ continue
574
+
575
+ yield os.path.join(root, data_file)
576
+
577
+ def dataset_document_iterator(self, file_path: str) -> Iterator[List[OntonotesSentence]]:
578
+ """
579
+ An iterator over CONLL formatted files which yields documents, regardless
580
+ of the number of document annotations in a particular file. This is useful
581
+ for conll data which has been preprocessed, such as the preprocessing which
582
+ takes place for the 2012 CONLL Coreference Resolution task.
583
+ """
584
+ with open(file_path, "r", encoding="utf8") as open_file:
585
+ conll_rows = []
586
+ document: List[OntonotesSentence] = []
587
+ for line in open_file:
588
+ line = line.strip()
589
+ if line != "" and not line.startswith("#"):
590
+ # Non-empty line. Collect the annotation.
591
+ conll_rows.append(line)
592
+ else:
593
+ if conll_rows:
594
+ document.append(self._conll_rows_to_sentence(conll_rows))
595
+ conll_rows = []
596
+ if line.startswith("#end document"):
597
+ yield document
598
+ document = []
599
+ if document:
600
+ # Collect any stragglers or files which might not
601
+ # have the '#end document' format for the end of the file.
602
+ yield document
603
+
604
+ def sentence_iterator(self, file_path: str) -> Iterator[OntonotesSentence]:
605
+ """
606
+ An iterator over the sentences in an individual CONLL formatted file.
607
+ """
608
+ for document in self.dataset_document_iterator(file_path):
609
+ for sentence in document:
610
+ yield sentence
611
+
612
+ def _conll_rows_to_sentence(self, conll_rows: List[str]) -> OntonotesSentence:
613
+ document_id: str = None
614
+ sentence_id: int = None
615
+ # The words in the sentence.
616
+ sentence: List[str] = []
617
+ # The pos tags of the words in the sentence.
618
+ pos_tags: List[str] = []
619
+ # the pieces of the parse tree.
620
+ parse_pieces: List[str] = []
621
+ # The lemmatised form of the words in the sentence which
622
+ # have SRL or word sense information.
623
+ predicate_lemmas: List[str] = []
624
+ # The FrameNet ID of the predicate.
625
+ predicate_framenet_ids: List[str] = []
626
+ # The sense of the word, if available.
627
+ word_senses: List[float] = []
628
+ # The current speaker, if available.
629
+ speakers: List[str] = []
630
+
631
+ verbal_predicates: List[str] = []
632
+ span_labels: List[List[str]] = []
633
+ current_span_labels: List[str] = []
634
+
635
+ # Cluster id -> List of (start_index, end_index) spans.
636
+ clusters: DefaultDict[int, List[Tuple[int, int]]] = defaultdict(list)
637
+ # Cluster id -> List of start_indices which are open for this id.
638
+ coref_stacks: DefaultDict[int, List[int]] = defaultdict(list)
639
+
640
+ for index, row in enumerate(conll_rows):
641
+ conll_components = row.split()
642
+
643
+ document_id = conll_components[0]
644
+ sentence_id = int(conll_components[1])
645
+ word = conll_components[3]
646
+ pos_tag = conll_components[4]
647
+ parse_piece = conll_components[5]
648
+
649
+ # Replace brackets in text and pos tags
650
+ # with a different token for parse trees.
651
+ if pos_tag != "XX" and word != "XX":
652
+ if word == "(":
653
+ parse_word = "-LRB-"
654
+ elif word == ")":
655
+ parse_word = "-RRB-"
656
+ else:
657
+ parse_word = word
658
+ if pos_tag == "(":
659
+ pos_tag = "-LRB-"
660
+ if pos_tag == ")":
661
+ pos_tag = "-RRB-"
662
+ (left_brackets, right_hand_side) = parse_piece.split("*")
663
+ # only keep ')' if there are nested brackets with nothing in them.
664
+ right_brackets = right_hand_side.count(")") * ")"
665
+ parse_piece = f"{left_brackets} ({pos_tag} {parse_word}) {right_brackets}"
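+ # e.g. a parse bit "(S(NP*)" with pos_tag "DT" and parse_word "The"
+ # becomes "(S(NP (DT The) )".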
666
+ else:
667
+ # There are some bad annotations in the CONLL data.
668
+ # They contain no information, so to make this explicit,
669
+ # we just set the parse piece to be None which will result
670
+ # in the overall parse tree being None.
671
+ parse_piece = None
672
+
673
+ lemmatised_word = conll_components[6]
674
+ framenet_id = conll_components[7]
675
+ word_sense = conll_components[8]
676
+ speaker = conll_components[9]
677
+
678
+ if not span_labels:
679
+ # If this is the first word in the sentence, create
680
+ # empty lists to collect the NER and SRL BIO labels.
681
+ # We can't do this upfront, because we don't know how many
682
+ # components we are collecting, as a sentence can have
683
+ # variable numbers of SRL frames.
684
+ span_labels = [[] for _ in conll_components[10:-1]]
685
+ # Create variables representing the current label for each label
686
+ # sequence we are collecting.
687
+ current_span_labels = [None for _ in conll_components[10:-1]]
688
+
689
+ self._process_span_annotations_for_word(conll_components[10:-1], span_labels, current_span_labels)
690
+
691
+ # If any annotation marks this word as a verb predicate,
692
+ # we need to record its index. This also has the side effect
693
+ # of ordering the verbal predicates by their location in the
694
+ # sentence, automatically aligning them with the annotations.
695
+ word_is_verbal_predicate = any("(V" in x for x in conll_components[11:-1])
696
+ if word_is_verbal_predicate:
697
+ verbal_predicates.append(word)
698
+
699
+ self._process_coref_span_annotations_for_word(conll_components[-1], index, clusters, coref_stacks)
700
+
701
+ sentence.append(word)
702
+ pos_tags.append(pos_tag)
703
+ parse_pieces.append(parse_piece)
704
+ predicate_lemmas.append(lemmatised_word if lemmatised_word != "-" else None)
705
+ predicate_framenet_ids.append(framenet_id if framenet_id != "-" else None)
706
+ word_senses.append(float(word_sense) if word_sense != "-" else None)
707
+ speakers.append(speaker if speaker != "-" else None)
708
+
709
+ named_entities = span_labels[0]
710
+ srl_frames = [(predicate, labels) for predicate, labels in zip(verbal_predicates, span_labels[1:])]
711
+
712
+ if all(parse_pieces):
713
+ parse_tree = "".join(parse_pieces)
714
+ else:
715
+ parse_tree = None
716
+ coref_span_tuples = {(cluster_id, span) for cluster_id, span_list in clusters.items() for span in span_list}
717
+ return OntonotesSentence(
718
+ document_id,
719
+ sentence_id,
720
+ sentence,
721
+ pos_tags,
722
+ parse_tree,
723
+ predicate_lemmas,
724
+ predicate_framenet_ids,
725
+ word_senses,
726
+ speakers,
727
+ named_entities,
728
+ srl_frames,
729
+ coref_span_tuples,
730
+ )
731
+
732
+ @staticmethod
733
+ def _process_coref_span_annotations_for_word(
734
+ label: str,
735
+ word_index: int,
736
+ clusters: DefaultDict[int, List[Tuple[int, int]]],
737
+ coref_stacks: DefaultDict[int, List[int]],
738
+ ) -> None:
739
+ """
740
+ For a given coref label, add it to a currently open span(s), complete a span(s) or
741
+ ignore it, if it is outside of all spans. This method mutates the clusters and coref_stacks
742
+ dictionaries.
743
+ # Parameters
744
+ label : `str`
745
+ The coref label for this word.
746
+ word_index : `int`
747
+ The word index into the sentence.
748
+ clusters : `DefaultDict[int, List[Tuple[int, int]]]`
749
+ A dictionary mapping cluster ids to lists of inclusive spans into the
750
+ sentence.
751
+ coref_stacks : `DefaultDict[int, List[int]]`
752
+ Stacks for each cluster id to hold the start indices of active spans (spans
753
+ which we are inside of when processing a given word). Spans with the same id
754
+ can be nested, which is why we collect these opening spans on a stack, e.g:
755
+ [Greg, the baker who referred to [himself]_ID1 as 'the bread man']_ID1
756
+ """
757
+ if label != "-":
758
+ for segment in label.split("|"):
759
+ # The conll representation of coref spans allows spans to
760
+ # overlap. If spans end or begin at the same word, they are
761
+ # separated by a "|".
762
+ if segment[0] == "(":
763
+ # The span begins at this word.
764
+ if segment[-1] == ")":
765
+ # The span begins and ends at this word (single word span).
766
+ cluster_id = int(segment[1:-1])
767
+ clusters[cluster_id].append((word_index, word_index))
768
+ else:
769
+ # The span is starting, so we record the index of the word.
770
+ cluster_id = int(segment[1:])
771
+ coref_stacks[cluster_id].append(word_index)
772
+ else:
773
+ # The span for this id is ending, but didn't start at this word.
774
+ # Retrieve the start index from the document state and
775
+ # add the span to the clusters for this id.
776
+ cluster_id = int(segment[:-1])
777
+ start = coref_stacks[cluster_id].pop()
778
+ clusters[cluster_id].append((start, word_index))
779
+
780
+ @staticmethod
781
+ def _process_span_annotations_for_word(
782
+ annotations: List[str],
783
+ span_labels: List[List[str]],
784
+ current_span_labels: List[Optional[str]],
785
+ ) -> None:
786
+ """
787
+ Given a sequence of different label types for a single word and the current
788
+ span label we are inside, compute the BIO tag for each label and append to a list.
789
+ # Parameters
790
+ annotations : `List[str]`
791
+ A list of labels to compute BIO tags for.
792
+ span_labels : `List[List[str]]`
793
+ A list of lists, one for each annotation, to incrementally collect
794
+ the BIO tags for a sequence.
795
+ current_span_labels : `List[Optional[str]]`
796
+ The currently open span per annotation type, or `None` if there is no open span.
797
+ """
798
+ for annotation_index, annotation in enumerate(annotations):
799
+ # strip all bracketing information to
800
+ # get the actual propbank label.
801
+ label = annotation.strip("()*")
802
+
803
+ if "(" in annotation:
804
+ # Entering into a span for a particular semantic role label.
805
+ # We append the label and set the current span for this annotation.
806
+ bio_label = "B-" + label
807
+ span_labels[annotation_index].append(bio_label)
808
+ current_span_labels[annotation_index] = label
809
+ elif current_span_labels[annotation_index] is not None:
810
+ # If there's no '(' token, but the current_span_label is not None,
811
+ # then we are inside a span.
812
+ bio_label = "I-" + current_span_labels[annotation_index]
813
+ span_labels[annotation_index].append(bio_label)
814
+ else:
815
+ # We're outside a span.
816
+ span_labels[annotation_index].append("O")
817
+ # Exiting a span, so we reset the current span label for this annotation.
818
+ if ")" in annotation:
819
+ current_span_labels[annotation_index] = None
dataset_infos.json ADDED
@@ -0,0 +1 @@
1
+ {"english_v4": {"description": "OntoNotes v5.0 is the final version of OntoNotes corpus, and is a large-scale, multi-genre,\nmultilingual corpus manually annotated with syntactic, semantic and discourse information.\n\nThis dataset is the version of OntoNotes v5.0 extended and is used in the CoNLL-2012 shared task.\nIt includes v4 train/dev and v9 test data for English/Chinese/Arabic and corrected version v12 train/dev/test data (English only).\n\nThe source of data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which seems to be as the same as the official data, but users should use this dataset on their own responsibility.\n\nSee also summaries from paperwithcode, [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1)\n\nFor more detailed info of the dataset like annotation, tag set, etc., you can refer to the documents in the Mendeley repo mentioned above. \n", "citation": "@inproceedings{pradhan-etal-2013-towards,\n title = \"Towards Robust Linguistic Analysis using {O}nto{N}otes\",\n author = {Pradhan, Sameer and\n Moschitti, Alessandro and\n Xue, Nianwen and\n Ng, Hwee Tou and\n Bj{\"o}rkelund, Anders and\n Uryupina, Olga and\n Zhang, Yuchen and\n Zhong, Zhi},\n booktitle = \"Proceedings of the Seventeenth Conference on Computational Natural Language Learning\",\n month = aug,\n year = \"2013\",\n address = \"Sofia, Bulgaria\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/W13-3516\",\n pages = \"143--152\",\n}\n\nRalph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston. OntoNotes Release 5.0 LDC2013T19. Web Download. 
Philadelphia: Linguistic Data Consortium, 2013.\n", "homepage": "https://conll.cemantix.org/2012/introduction.html", "license": "", "features": {"document_id": {"dtype": "string", "id": null, "_type": "Value"}, "sentences": [{"part_id": {"dtype": "int32", "id": null, "_type": "Value"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 49, "names": ["XX", "``", "$", "''", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "parse_tree": {"dtype": "string", "id": null, "_type": "Value"}, "predicate_lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "predicate_framenet_ids": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "word_senses": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "speaker": {"dtype": "string", "id": null, "_type": "Value"}, "named_entities": {"feature": {"num_classes": 37, "names": ["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "srl_frames": [{"verb": {"dtype": "string", "id": null, "_type": "Value"}, "frames": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}], "coref_spans": {"feature": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": 3, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "conll2012_ontonotesv5", "config_name": "english_v4", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 0, "num_examples": 0, "dataset_name": "conll2012_ontonotesv5"}, "validation": {"name": "validation", "num_bytes": 0, "num_examples": 0, "dataset_name": "conll2012_ontonotesv5"}, "test": {"name": "test", "num_bytes": 0, "num_examples": 0, "dataset_name": "conll2012_ontonotesv5"}}, "download_checksums": {"https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip": {"num_bytes": 183323987, "checksum": null}}, "download_size": 183323987, "post_processing_size": null, "dataset_size": 0, "size_in_bytes": 183323987}, "chinese_v4": {"description": "OntoNotes v5.0 is the final version of OntoNotes corpus, and is a large-scale, multi-genre,\nmultilingual corpus manually annotated with syntactic, semantic and discourse information.\n\nThis dataset is the version of OntoNotes v5.0 extended and is used in the CoNLL-2012 shared task.\nIt includes v4 train/dev and v9 test data for 
English/Chinese/Arabic and corrected version v12 train/dev/test data (English only).\n\nThe source of data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which seems to be as the same as the official data, but users should use this dataset on their own responsibility.\n\nSee also summaries from paperwithcode, [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1)\n\nFor more detailed info of the dataset like annotation, tag set, etc., you can refer to the documents in the Mendeley repo mentioned above. \n", "citation": "@inproceedings{pradhan-etal-2013-towards,\n title = \"Towards Robust Linguistic Analysis using {O}nto{N}otes\",\n author = {Pradhan, Sameer and\n Moschitti, Alessandro and\n Xue, Nianwen and\n Ng, Hwee Tou and\n Bj{\"o}rkelund, Anders and\n Uryupina, Olga and\n Zhang, Yuchen and\n Zhong, Zhi},\n booktitle = \"Proceedings of the Seventeenth Conference on Computational Natural Language Learning\",\n month = aug,\n year = \"2013\",\n address = \"Sofia, Bulgaria\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/W13-3516\",\n pages = \"143--152\",\n}\n\nRalph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston. OntoNotes Release 5.0 LDC2013T19. Web Download. Philadelphia: Linguistic Data Consortium, 2013.\n", "homepage": "https://conll.cemantix.org/2012/introduction.html", "license": "", "features": {"document_id": {"dtype": "string", "id": null, "_type": "Value"}, "sentences": [{"part_id": {"dtype": "int32", "id": null, "_type": "Value"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 36, "names": ["X", "AD", "AS", "BA", "CC", "CD", "CS", "DEC", "DEG", "DER", "DEV", "DT", "ETC", "FW", "IJ", "INF", "JJ", "LB", "LC", "M", "MSP", "NN", "NR", "NT", "OD", "ON", "P", "PN", "PU", "SB", "SP", "URL", "VA", "VC", "VE", "VV"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "parse_tree": {"dtype": "string", "id": null, "_type": "Value"}, "predicate_lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "predicate_framenet_ids": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "word_senses": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "speaker": {"dtype": "string", "id": null, "_type": "Value"}, "named_entities": {"feature": {"num_classes": 37, "names": ["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "srl_frames": [{"verb": {"dtype": "string", "id": null, "_type": "Value"}, "frames": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": 
"Sequence"}}], "coref_spans": {"feature": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": 3, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "conll2012_ontonotesv5", "config_name": "chinese_v4", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 0, "num_examples": 0, "dataset_name": "conll2012_ontonotesv5"}, "validation": {"name": "validation", "num_bytes": 0, "num_examples": 0, "dataset_name": "conll2012_ontonotesv5"}, "test": {"name": "test", "num_bytes": 0, "num_examples": 0, "dataset_name": "conll2012_ontonotesv5"}}, "download_checksums": {"https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip": {"num_bytes": 183323987, "checksum": null}}, "download_size": 183323987, "post_processing_size": null, "dataset_size": 0, "size_in_bytes": 183323987}, "arabic_v4": {"description": "OntoNotes v5.0 is the final version of OntoNotes corpus, and is a large-scale, multi-genre,\nmultilingual corpus manually annotated with syntactic, semantic and discourse information.\n\nThis dataset is the version of OntoNotes v5.0 extended and is used in the CoNLL-2012 shared task.\nIt includes v4 train/dev and v9 test data for English/Chinese/Arabic and corrected version v12 train/dev/test data (English only).\n\nThe source of data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which seems to be as the same as the official data, but users should use this dataset on their own responsibility.\n\nSee also summaries from paperwithcode, [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1)\n\nFor more detailed info of the dataset like annotation, tag set, etc., you can refer to the documents in the Mendeley repo mentioned above. \n", "citation": "@inproceedings{pradhan-etal-2013-towards,\n title = \"Towards Robust Linguistic Analysis using {O}nto{N}otes\",\n author = {Pradhan, Sameer and\n Moschitti, Alessandro and\n Xue, Nianwen and\n Ng, Hwee Tou and\n Bj{\"o}rkelund, Anders and\n Uryupina, Olga and\n Zhang, Yuchen and\n Zhong, Zhi},\n booktitle = \"Proceedings of the Seventeenth Conference on Computational Natural Language Learning\",\n month = aug,\n year = \"2013\",\n address = \"Sofia, Bulgaria\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/W13-3516\",\n pages = \"143--152\",\n}\n\nRalph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston. OntoNotes Release 5.0 LDC2013T19. Web Download. 
Philadelphia: Linguistic Data Consortium, 2013.\n", "homepage": "https://conll.cemantix.org/2012/introduction.html", "license": "", "features": {"document_id": {"dtype": "string", "id": null, "_type": "Value"}, "sentences": [{"part_id": {"dtype": "int32", "id": null, "_type": "Value"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "parse_tree": {"dtype": "string", "id": null, "_type": "Value"}, "predicate_lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "predicate_framenet_ids": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "word_senses": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "speaker": {"dtype": "string", "id": null, "_type": "Value"}, "named_entities": {"feature": {"num_classes": 37, "names": ["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "srl_frames": [{"verb": {"dtype": "string", "id": null, "_type": "Value"}, "frames": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}], "coref_spans": {"feature": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": 3, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "conll2012_ontonotesv5", "config_name": "arabic_v4", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 0, "num_examples": 0, "dataset_name": "conll2012_ontonotesv5"}, "validation": {"name": "validation", "num_bytes": 0, "num_examples": 0, "dataset_name": "conll2012_ontonotesv5"}, "test": {"name": "test", "num_bytes": 0, "num_examples": 0, "dataset_name": "conll2012_ontonotesv5"}}, "download_checksums": {"https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip": {"num_bytes": 183323987, "checksum": null}}, "download_size": 183323987, "post_processing_size": null, "dataset_size": 0, "size_in_bytes": 183323987}, "english_v12": {"description": "OntoNotes v5.0 is the final version of OntoNotes corpus, and is a large-scale, multi-genre,\nmultilingual corpus manually annotated with syntactic, semantic and discourse information.\n\nThis dataset is the version of OntoNotes v5.0 extended and is used in the CoNLL-2012 shared task.\nIt includes v4 train/dev and v9 test data for English/Chinese/Arabic and corrected version v12 train/dev/test data (English only).\n\nThe source of data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which seems to be as the same as the official data, but users should use this dataset on their own responsibility.\n\nSee also summaries from paperwithcode, 
[OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1)\n\nFor more detailed information about the dataset (annotation scheme, tag sets, etc.), refer to the documents in the Mendeley repo mentioned above.\n", "citation": "@inproceedings{pradhan-etal-2013-towards,\n title = \"Towards Robust Linguistic Analysis using {O}nto{N}otes\",\n author = {Pradhan, Sameer and\n Moschitti, Alessandro and\n Xue, Nianwen and\n Ng, Hwee Tou and\n Bj{\"o}rkelund, Anders and\n Uryupina, Olga and\n Zhang, Yuchen and\n Zhong, Zhi},\n booktitle = \"Proceedings of the Seventeenth Conference on Computational Natural Language Learning\",\n month = aug,\n year = \"2013\",\n address = \"Sofia, Bulgaria\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/W13-3516\",\n pages = \"143--152\",\n}\n\nRalph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston. OntoNotes Release 5.0 LDC2013T19. Web Download. Philadelphia: Linguistic Data Consortium, 2013.\n", "homepage": "https://conll.cemantix.org/2012/introduction.html", "license": "", "features": {"document_id": {"dtype": "string", "id": null, "_type": "Value"}, "sentences": [{"part_id": {"dtype": "int32", "id": null, "_type": "Value"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 51, "names": ["XX", "``", "$", "''", "*", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "VERB", "WDT", "WP", "WP$", "WRB"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "parse_tree": {"dtype": "string", "id": null, "_type": "Value"}, "predicate_lemmas": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "predicate_framenet_ids": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "word_senses": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "speaker": {"dtype": "string", "id": null, "_type": "Value"}, "named_entities": {"feature": {"num_classes": 37, "names": ["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "srl_frames": [{"verb": {"dtype": "string", "id": null, "_type": "Value"}, "frames": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}], "coref_spans": {"feature": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": 3, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, 
"task_templates": null, "builder_name": "conll2012_ontonotesv5", "config_name": "english_v12", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 171692045, "num_examples": 8106, "dataset_name": "conll2012_ontonotesv5"}, "validation": {"name": "validation", "num_bytes": 24264804, "num_examples": 1370, "dataset_name": "conll2012_ontonotesv5"}, "test": {"name": "test", "num_bytes": 18254144, "num_examples": 1200, "dataset_name": "conll2012_ontonotesv5"}}, "download_checksums": {"https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip": {"num_bytes": 183323987, "checksum": null}}, "download_size": 183323987, "post_processing_size": null, "dataset_size": 214210993, "size_in_bytes": 397534980}}
dummy/arabic_v4/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81686610f9c3cffc29e820a825b2f2ab85b7cdc86a3ca58842afecad2a3b5dc7
+ size 37013
dummy/chinese_v4/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a04ae3d0a91e1912b7efdc81364f0cf7de2bad9fdda396073159dcec1332b68
+ size 11936
dummy/english_v12/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df4240b490d6aa9a0c0d957b8c835981ab68b42a880a66154b40c662a30279fb
+ size 11863
dummy/english_v4/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0fea4942d9a20a28668f2f09098273b1009d6ac958d7ca286504a8e91c764132
+ size 12823
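
The four `dummy_data.zip` entries above are Git LFS pointer stubs rather than the archives themselves: per the LFS spec, `oid` is the SHA-256 digest of the real file's content and `size` is its length in bytes. A small sketch, assuming the `english_v4` archive has been pulled locally, of verifying a file against its pointer:

```python
import hashlib
from pathlib import Path

# Path and expected values taken from the english_v4 pointer above.
path = Path("dummy/english_v4/1.0.0/dummy_data.zip")
data = path.read_bytes()

# A Git LFS pointer's "oid" is the SHA-256 of the stored file's content,
# and "size" is its length in bytes.
assert len(data) == 12823
assert hashlib.sha256(data).hexdigest() == (
    "0fea4942d9a20a28668f2f09098273b1009d6ac958d7ca286504a8e91c764132"
)
print("dummy_data.zip matches its LFS pointer")
```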