system HF staff committed on
Commit 43d4da7
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,195 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - id
+ licenses:
+ - other-nergrit-license
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - structure-prediction
+ task_ids:
+ - named-entity-recognition
+ ---
+
+ # Dataset Card for Nergrit Corpus
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [PT Gria Inovasi Teknologi](https://grit.id/)
+ - **Repository:** [Nergrit Corpus](https://github.com/grit-id/nergrit-corpus)
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** [Taufiqur Rohman](mailto:taufiq@grit.id)
+
+ ### Dataset Summary
+
+ Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction,
+ and Sentiment Analysis developed by [PT Gria Inovasi Teknologi (GRIT)](https://grit.id/).
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ Indonesian
+
+ ## Dataset Structure
+
+ In the source files, sentences are separated by an empty line and each line holds one tab-separated token and tag pair. A loaded data point looks like this:
+ ```
+ {'id': '0',
+ 'tokens': ['Gubernur', 'Bank', 'Indonesia', 'menggelar', 'konferensi', 'pers'],
+ 'ner_tags': [9, 28, 28, 38, 38, 38],
+ }
+ ```
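+ A data point like the one above can be loaded with the `datasets` library; a minimal sketch (the config names `ner`, `sentiment`, and `statement` come from the loading script in this repository):
+ ```
+ from datasets import load_dataset
+
+ # "ner" is the Named Entity Recognition config; "sentiment" and "statement"
+ # select the other two Nergrit Corpus tasks.
+ dataset = load_dataset("id_nergrit_corpus", "ner")
+
+ print(dataset["train"][0])
+ # -> a dict with 'id', 'tokens' and 'ner_tags' keys, like the example above
+ ```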
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+ - `id`: id of the sample
+ - `tokens`: the tokens of the example text
+ - `ner_tags`: the NER tags of each token
+
+ #### Named Entity Recognition
+ The ner_tags correspond to this list:
+ ```
+ "B-CRD", "B-DAT", "B-EVT", "B-FAC", "B-GPE", "B-LAN", "B-LAW", "B-LOC", "B-MON", "B-NOR",
+ "B-ORD", "B-ORG", "B-PER", "B-PRC", "B-PRD", "B-QTY", "B-REG", "B-TIM", "B-WOA",
+ "I-CRD", "I-DAT", "I-EVT", "I-FAC", "I-GPE", "I-LAN", "I-LAW", "I-LOC", "I-MON", "I-NOR",
+ "I-ORD", "I-ORG", "I-PER", "I-PRC", "I-PRD", "I-QTY", "I-REG", "I-TIM", "I-WOA", "O",
+ ```
+ The ner_tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any
+ non-initial word. The dataset contains the following 19 entity types:
+ ```
+ 'CRD': Cardinal
+ 'DAT': Date
+ 'EVT': Event
+ 'FAC': Facility
+ 'GPE': Geopolitical Entity
+ 'LAW': Law Entity (such as Undang-Undang)
+ 'LOC': Location
+ 'MON': Money
+ 'NOR': Political Organization
+ 'ORD': Ordinal
+ 'ORG': Organization
+ 'PER': Person
+ 'PRC': Percent
+ 'PRD': Product
+ 'QTY': Quantity
+ 'REG': Religion
+ 'TIM': Time
+ 'WOA': Work of Art
+ 'LAN': Language
+ ```
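+ The integer values in `ner_tags` index into this label list; a minimal sketch of mapping them back to tag strings via the `ClassLabel` feature defined by the loading script:
+ ```
+ from datasets import load_dataset
+
+ ds = load_dataset("id_nergrit_corpus", "ner", split="train")
+
+ # ner_tags stores class indices; the Sequence(ClassLabel) feature maps them back to names.
+ labels = ds.features["ner_tags"].feature
+ example = ds[0]
+ print([labels.int2str(tag) for tag in example["ner_tags"]])
+ # As a reference point, the indices [9, 28, 28, 38, 38, 38] from the sample above
+ # map to ['B-NOR', 'I-NOR', 'I-NOR', 'O', 'O', 'O'] in this label list.
+ ```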
+ #### Sentiment Analysis
+ The ner_tags correspond to this list:
+ ```
+ "B-NEG", "B-NET", "B-POS",
+ "I-NEG", "I-NET", "I-POS",
+ "O",
+ ```
+
+ #### Statement Extraction
+ The ner_tags correspond to this list:
+ ```
+ "B-BREL", "B-FREL", "B-STAT", "B-WHO",
+ "I-BREL", "I-FREL", "I-STAT", "I-WHO",
+ "O"
+ ```
+ The ner_tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any
+ non-initial word.
+
+ ### Data Splits
+
+ The dataset is split into train, validation and test sets.
+
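+ A minimal sketch for checking the split sizes; the expected counts for the `ner` config are taken from `dataset_infos.json` in this repository:
+ ```
+ from datasets import load_dataset
+
+ ds = load_dataset("id_nergrit_corpus", "ner")
+ print({split: ds[split].num_rows for split in ds})
+ # Expected sizes for "ner" (per dataset_infos.json): train 12532, validation 2521, test 2399
+ ```
+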
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+ The annotators are listed in the
+ [Nergrit Corpus repository](https://github.com/grit-id/nergrit-corpus).
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
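+
+ In the meantime, the loading script in this repository ships the following BibTeX entry:
+ ```
+ @inproceedings{id_nergrit_corpus,
+  author = {Gria Inovasi Teknologi},
+  title = {NERGRIT CORPUS},
+  year = {2019},
+  url = {https://github.com/grit-id/nergrit-corpus},
+ }
+ ```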
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"ner": {"description": "Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment\nAnalysis. id_nergrit_corpus is the Named Entity Recognition of this dataset collection which contains 18 entities as\nfollow:\n 'CRD': Cardinal\n 'DAT': Date\n 'EVT': Event\n 'FAC': Facility\n 'GPE': Geopolitical Entity\n 'LAW': Law Entity (such as Undang-Undang)\n 'LOC': Location\n 'MON': Money\n 'NOR': Political Organization\n 'ORD': Ordinal\n 'ORG': Organization\n 'PER': Person\n 'PRC': Percent\n 'PRD': Product\n 'QTY': Quantity\n 'REG': Religion\n 'TIM': Time\n 'WOA': Work of Art\n 'LAN': Language\n", "citation": "@inproceedings{id_nergrit_corpus,\n author = {Gria Inovasi Teknologi},\n title = {NERGRIT CORPUS},\n year = {2019},\n url = {https://github.com/grit-id/nergrit-corpus},\n}\n", "homepage": "https://github.com/grit-id/nergrit-corpus", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 39, "names": ["B-CRD", "B-DAT", "B-EVT", "B-FAC", "B-GPE", "B-LAN", "B-LAW", "B-LOC", "B-MON", "B-NOR", "B-ORD", "B-ORG", "B-PER", "B-PRC", "B-PRD", "B-QTY", "B-REG", "B-TIM", "B-WOA", "I-CRD", "I-DAT", "I-EVT", "I-FAC", "I-GPE", "I-LAN", "I-LAW", "I-LOC", "I-MON", "I-NOR", "I-ORD", "I-ORG", "I-PER", "I-PRC", "I-PRD", "I-QTY", "I-REG", "I-TIM", "I-WOA", "O"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "id_nergrit_corpus", "config_name": "ner", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5428411, "num_examples": 12532, "dataset_name": "id_nergrit_corpus"}, "test": {"name": "test", "num_bytes": 1135577, "num_examples": 2399, "dataset_name": "id_nergrit_corpus"}, "validation": {"name": "validation", "num_bytes": 1086437, "num_examples": 2521, "dataset_name": "id_nergrit_corpus"}}, "download_checksums": {"https://github.com/cahya-wirawan/indonesian-language-models/raw/master/data/nergrit-corpus_20190726_corrected.tgz": {"num_bytes": 14988232, "checksum": "ac53b61612d6d53c8c800a67d70b6b800f662ab7029aa622163834945efa85d6"}}, "download_size": 14988232, "post_processing_size": null, "dataset_size": 7650425, "size_in_bytes": 22638657}, "sentiment": {"description": "Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment\nAnalysis. 
id_nergrit_corpus is the Named Entity Recognition of this dataset collection which contains 18 entities as\nfollow:\n 'CRD': Cardinal\n 'DAT': Date\n 'EVT': Event\n 'FAC': Facility\n 'GPE': Geopolitical Entity\n 'LAW': Law Entity (such as Undang-Undang)\n 'LOC': Location\n 'MON': Money\n 'NOR': Political Organization\n 'ORD': Ordinal\n 'ORG': Organization\n 'PER': Person\n 'PRC': Percent\n 'PRD': Product\n 'QTY': Quantity\n 'REG': Religion\n 'TIM': Time\n 'WOA': Work of Art\n 'LAN': Language\n", "citation": "@inproceedings{id_nergrit_corpus,\n author = {Gria Inovasi Teknologi},\n title = {NERGRIT CORPUS},\n year = {2019},\n url = {https://github.com/grit-id/nergrit-corpus},\n}\n", "homepage": "https://github.com/grit-id/nergrit-corpus", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 7, "names": ["B-NEG", "B-NET", "B-POS", "I-NEG", "I-NET", "I-POS", "O"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "id_nergrit_corpus", "config_name": "sentiment", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3167972, "num_examples": 7485, "dataset_name": "id_nergrit_corpus"}, "test": {"name": "test", "num_bytes": 1097517, "num_examples": 2317, "dataset_name": "id_nergrit_corpus"}, "validation": {"name": "validation", "num_bytes": 337679, "num_examples": 782, "dataset_name": "id_nergrit_corpus"}}, "download_checksums": {"https://github.com/cahya-wirawan/indonesian-language-models/raw/master/data/nergrit-corpus_20190726_corrected.tgz": {"num_bytes": 14988232, "checksum": "ac53b61612d6d53c8c800a67d70b6b800f662ab7029aa622163834945efa85d6"}}, "download_size": 14988232, "post_processing_size": null, "dataset_size": 4603168, "size_in_bytes": 19591400}, "statement": {"description": "Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment\nAnalysis. 
id_nergrit_corpus is the Named Entity Recognition of this dataset collection which contains 18 entities as\nfollow:\n 'CRD': Cardinal\n 'DAT': Date\n 'EVT': Event\n 'FAC': Facility\n 'GPE': Geopolitical Entity\n 'LAW': Law Entity (such as Undang-Undang)\n 'LOC': Location\n 'MON': Money\n 'NOR': Political Organization\n 'ORD': Ordinal\n 'ORG': Organization\n 'PER': Person\n 'PRC': Percent\n 'PRD': Product\n 'QTY': Quantity\n 'REG': Religion\n 'TIM': Time\n 'WOA': Work of Art\n 'LAN': Language\n", "citation": "@inproceedings{id_nergrit_corpus,\n author = {Gria Inovasi Teknologi},\n title = {NERGRIT CORPUS},\n year = {2019},\n url = {https://github.com/grit-id/nergrit-corpus},\n}\n", "homepage": "https://github.com/grit-id/nergrit-corpus", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["B-BREL", "B-FREL", "B-STAT", "B-WHO", "I-BREL", "I-FREL", "I-STAT", "I-WHO", "O"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "id_nergrit_corpus", "config_name": "statement", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1469081, "num_examples": 2405, "dataset_name": "id_nergrit_corpus"}, "test": {"name": "test", "num_bytes": 182553, "num_examples": 335, "dataset_name": "id_nergrit_corpus"}, "validation": {"name": "validation", "num_bytes": 105119, "num_examples": 176, "dataset_name": "id_nergrit_corpus"}}, "download_checksums": {"https://github.com/cahya-wirawan/indonesian-language-models/raw/master/data/nergrit-corpus_20190726_corrected.tgz": {"num_bytes": 14988232, "checksum": "ac53b61612d6d53c8c800a67d70b6b800f662ab7029aa622163834945efa85d6"}}, "download_size": 14988232, "post_processing_size": null, "dataset_size": 1756753, "size_in_bytes": 16744985}}
dummy/ner/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:30aac36dbdd884dc8c6a719389684c8f85f229849a6184f437a015a44b4f7100
+ size 6511
dummy/sentiment/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:30aac36dbdd884dc8c6a719389684c8f85f229849a6184f437a015a44b4f7100
+ size 6511
dummy/statement/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:30aac36dbdd884dc8c6a719389684c8f85f229849a6184f437a015a44b4f7100
+ size 6511
id_nergrit_corpus.py ADDED
@@ -0,0 +1,240 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Nergrit Corpus"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import logging
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{id_nergrit_corpus,
+  author = {Gria Inovasi Teknologi},
+  title = {NERGRIT CORPUS},
+  year = {2019},
+  url = {https://github.com/grit-id/nergrit-corpus},
+ }
+ """
+
+ _DESCRIPTION = """\
+ Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment
+ Analysis. id_nergrit_corpus is the Named Entity Recognition part of this dataset collection, which contains the
+ following 19 entities:
+  'CRD': Cardinal
+  'DAT': Date
+  'EVT': Event
+  'FAC': Facility
+  'GPE': Geopolitical Entity
+  'LAW': Law Entity (such as Undang-Undang)
+  'LOC': Location
+  'MON': Money
+  'NOR': Political Organization
+  'ORD': Ordinal
+  'ORG': Organization
+  'PER': Person
+  'PRC': Percent
+  'PRD': Product
+  'QTY': Quantity
+  'REG': Religion
+  'TIM': Time
+  'WOA': Work of Art
+  'LAN': Language
+ """
+
+ _HOMEPAGE = "https://github.com/grit-id/nergrit-corpus"
+
+ _LICENSE = ""
+
+ _URLs = [
+     "https://github.com/cahya-wirawan/indonesian-language-models/raw/master/data/nergrit-corpus_20190726_corrected.tgz",
+     "https://cloud.uncool.ai/index.php/s/2QEcMrgwkjMAo4o/download",
+ ]
+
+
+ class IdNergritCorpusConfig(datasets.BuilderConfig):
+     """BuilderConfig for IdNergritCorpus"""
+
+     def __init__(self, label_classes=None, **kwargs):
+         """BuilderConfig for IdNergritCorpus.
+         Args:
+             label_classes: list of tag names used by this configuration.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(IdNergritCorpusConfig, self).__init__(**kwargs)
+         self.label_classes = label_classes
+
+
+ class IdNergritCorpus(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         IdNergritCorpusConfig(
+             name="ner",
+             version=VERSION,
+             description="Named Entity Recognition dataset of Nergrit Corpus",
+             label_classes=[
+                 "B-CRD",
+                 "B-DAT",
+                 "B-EVT",
+                 "B-FAC",
+                 "B-GPE",
+                 "B-LAN",
+                 "B-LAW",
+                 "B-LOC",
+                 "B-MON",
+                 "B-NOR",
+                 "B-ORD",
+                 "B-ORG",
+                 "B-PER",
+                 "B-PRC",
+                 "B-PRD",
+                 "B-QTY",
+                 "B-REG",
+                 "B-TIM",
+                 "B-WOA",
+                 "I-CRD",
+                 "I-DAT",
+                 "I-EVT",
+                 "I-FAC",
+                 "I-GPE",
+                 "I-LAN",
+                 "I-LAW",
+                 "I-LOC",
+                 "I-MON",
+                 "I-NOR",
+                 "I-ORD",
+                 "I-ORG",
+                 "I-PER",
+                 "I-PRC",
+                 "I-PRD",
+                 "I-QTY",
+                 "I-REG",
+                 "I-TIM",
+                 "I-WOA",
+                 "O",
+             ],
+         ),
+         IdNergritCorpusConfig(
+             name="sentiment",
+             version=VERSION,
+             description="Sentiment Analysis dataset of Nergrit Corpus",
+             label_classes=[
+                 "B-NEG",
+                 "B-NET",
+                 "B-POS",
+                 "I-NEG",
+                 "I-NET",
+                 "I-POS",
+                 "O",
+             ],
+         ),
+         IdNergritCorpusConfig(
+             name="statement",
+             version=VERSION,
+             description="Statement Extraction dataset of Nergrit Corpus",
+             label_classes=[
+                 "B-BREL",
+                 "B-FREL",
+                 "B-STAT",
+                 "B-WHO",
+                 "I-BREL",
+                 "I-FREL",
+                 "I-STAT",
+                 "I-WHO",
+                 "O",
+             ],
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "id": datasets.Value("string"),
+                 "tokens": datasets.Sequence(datasets.Value("string")),
+                 "ner_tags": datasets.Sequence(datasets.features.ClassLabel(names=self.config.label_classes)),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         my_urls = _URLs[0]
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": os.path.join(
+                         data_dir, "nergrit-corpus/{}/data/train_corrected.txt".format(self.config.name)
+                     ),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": os.path.join(
+                         data_dir, "nergrit-corpus/{}/data/test_corrected.txt".format(self.config.name)
+                     ),
+                     "split": "test",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": os.path.join(
+                         data_dir, "nergrit-corpus/{}/data/valid_corrected.txt".format(self.config.name)
+                     ),
+                     "split": "dev",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         logging.info("⏳ Generating %s examples from = %s", split, filepath)
+         with open(filepath, encoding="utf-8") as f:
+             guid = 0
+             tokens = []
+             ner_tags = []
+             for line in f:
+                 # each non-empty line holds one whitespace-separated token/tag pair;
+                 # anything else (typically a blank line) ends the current sentence
+                 splits = line.strip().split()
+                 if len(splits) != 2:
+                     if tokens:
+                         assert len(tokens) == len(ner_tags), "word len doesn't match label length"
+                         yield guid, {
+                             "id": str(guid),
+                             "tokens": tokens,
+                             "ner_tags": ner_tags,
+                         }
+                         guid += 1
+                         tokens = []
+                         ner_tags = []
+                 else:
+                     tokens.append(splits[0])
+                     ner_tags.append(splits[1].rstrip())
+             # last example (only if the file does not end with a blank line)
+             if tokens:
+                 yield guid, {
+                     "id": str(guid),
+                     "tokens": tokens,
+                     "ner_tags": ner_tags,
+                 }