Commit dbc4d55 (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,175 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - ha
+ licenses:
+ - cc-by-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - structure-prediction
+ task_ids:
+ - named-entity-recognition
+ ---
+
+ # Dataset Card for Hausa VOA NER Corpus
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://www.aclweb.org/anthology/2020.emnlp-main.204/
+ - **Repository:** [Hausa VOA NER](https://github.com/uds-lsv/transfer-distant-transformer-african/tree/master/data/hausa_ner)
+ - **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.204/
+ - **Leaderboard:**
+ - **Point of Contact:** [David Adelani](mailto:didelani@lsv.uni-saarland.de)
+
+ ### Dataset Summary
+
+ The Hausa VOA NER corpus is a named entity recognition (NER) dataset for the Hausa language based on the [VOA Hausa news](https://www.voahausa.com/) corpus.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The language supported is Hausa.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point consists of sentences separated by an empty line, with tab-separated tokens and tags.
+ {'id': '0',
+  'ner_tags': ['B-PER', 'O', 'O', 'B-LOC', 'O'],
+  'tokens': ['Trump', 'ya', 'ce', 'Rasha', 'ma']
+ }
+
+ ### Data Fields
+
+ - `id`: id of the sample
+ - `tokens`: the tokens of the example text
+ - `ner_tags`: the NER tags of each token
+
+ The NER tags correspond to this list:
+ ```
+ "O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"
+ ```
+ The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & times (DATE). O is used for tokens not considered part of any named entity.
+
+ ### Data Splits
+
+ Training (1,014 sentences), validation (145 sentences) and test (291 sentences) splits.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The data was created to help introduce resources for a new language, Hausa.
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The dataset is based on the news domain and was crawled from [VOA Hausa news](https://www.voahausa.com/).
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ The dataset was collected from VOA Hausa news. Most of the texts used in creating the Hausa VOA NER corpus are news stories from Nigeria, Niger Republic, the United States, and other parts of the world.
+
+ [More Information Needed]
+
+ ### Annotations
+
+ Named entity recognition annotation
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ The data was annotated by Jesujoba Alabi and David Adelani for the paper:
+ [Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages](https://www.aclweb.org/anthology/2020.emnlp-main.204/).
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The annotated datasets were developed by students of Saarland University, Saarbrücken, Germany.
+
+ ### Licensing Information
+
+ The data is licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{hedderich-etal-2020-transfer,
+     title = "Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on {A}frican Languages",
+     author = "Hedderich, Michael A. and
+       Adelani, David and
+       Zhu, Dawei and
+       Alabi, Jesujoba and
+       Markus, Udia and
+       Klakow, Dietrich",
+     booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
+     month = nov,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.emnlp-main.204",
+     doi = "10.18653/v1/2020.emnlp-main.204",
+     pages = "2580--2591",
+ }
+ ```
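As a small illustration of the BIO tagging scheme the dataset card describes, the sketch below groups token-level tags into entity spans. This is plain Python with no dependencies; the helper name `bio_to_spans` is ours for illustration and is not part of the dataset or the `datasets` library.

```python
def bio_to_spans(tokens, tags):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    spans = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # B- opens a new span; flush any span still open.
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # I- continues the open span of the same type.
            current_tokens.append(token)
        else:
            # "O" (or a stray I-) closes any open span.
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_tokens:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

# Using the sample instance from the dataset card:
print(bio_to_spans(['Trump', 'ya', 'ce', 'Rasha', 'ma'],
                   ['B-PER', 'O', 'O', 'B-LOC', 'O']))
# → [('PER', 'Trump'), ('LOC', 'Rasha')]
```

Multi-word entities such as "New York" (B-LOC followed by I-LOC) come out as a single span, which is why the B-/I- distinction matters.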
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"hausa_voa_ner": {"description": "The Hausa VOA NER dataset is a labeled dataset for named entity recognition in Hausa. The texts were obtained from\nHausa Voice of America News articles https://www.voahausa.com/ . We concentrate on\nfour types of named entities: persons [PER], locations [LOC], organizations [ORG], and dates & time [DATE].\n\nThe Hausa VOA NER data files contain 2 columns separated by a tab ('\t'). Each word has been put on a separate line and\nthere is an empty line after each sentences i.e the CoNLL format. The first item on each line is a word, the second\nis the named entity tag. The named entity tags have the format I-TYPE which means that the word is inside a phrase\nof type TYPE. For every multi-word expression like 'New York', the first word gets a tag B-TYPE and the subsequent words\nhave tags I-TYPE, a word with tag O is not part of a phrase. The dataset is in the BIO tagging scheme.\n\nFor more details, see https://www.aclweb.org/anthology/2020.emnlp-main.204/\n", "citation": "@inproceedings{hedderich-etal-2020-transfer,\n    title = \"Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on {A}frican Languages\",\n    author = \"Hedderich, Michael A. and\n      Adelani, David and\n      Zhu, Dawei and\n      Alabi, Jesujoba and\n      Markus, Udia and\n      Klakow, Dietrich\",\n    booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n    month = nov,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.emnlp-main.204\",\n    doi = \"10.18653/v1/2020.emnlp-main.204\",\n    pages = \"2580--2591\",\n}\n", "homepage": "https://www.aclweb.org/anthology/2020.emnlp-main.204/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "hausa_voa_ner", "config_name": "hausa_voa_ner", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 483634, "num_examples": 1015, "dataset_name": "hausa_voa_ner"}, "validation": {"name": "validation", "num_bytes": 69673, "num_examples": 146, "dataset_name": "hausa_voa_ner"}, "test": {"name": "test", "num_bytes": 139227, "num_examples": 292, "dataset_name": "hausa_voa_ner"}}, "download_checksums": {"https://github.com/uds-lsv/transfer-distant-transformer-african/raw/master/data/hausa_ner/train_clean.tsv": {"num_bytes": 226686, "checksum": "ab8cf3e36e6ccba84168c8ddfd148b7abf20f97bb150c19cb579cc667de9b20b"}, "https://github.com/uds-lsv/transfer-distant-transformer-african/raw/master/data/hausa_ner/dev.tsv": {"num_bytes": 33139, "checksum": "f1bf48475498ed6c481840697c8a21d62de6e9044636a296bfe56d15bcd2e044"}, "https://github.com/uds-lsv/transfer-distant-transformer-african/raw/master/data/hausa_ner/test.tsv": {"num_bytes": 65137, "checksum": "139656b6a36946cbe3d19a8dd06689c23429c07fe82d59a9de631b8a0e4e69e3"}}, "download_size": 324962, "post_processing_size": null, "dataset_size": 692534, "size_in_bytes": 1017496}}
dummy/hausa_voa_ner/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a3784cf65b687612c09b4740af3b42921a85cb5a1d9d1abacdaba2303ac97b9
+ size 569
hausa_voa_ner.py ADDED
@@ -0,0 +1,162 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ 
+ """Hausa VOA NER dataset: a Hausa Voice of America (News) Named Entity Recognition dataset."""
+ 
+ 
+ from __future__ import absolute_import, division, print_function
+ 
+ import logging
+ 
+ import datasets
+ 
+ 
+ _CITATION = """\
+ @inproceedings{hedderich-etal-2020-transfer,
+     title = "Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on {A}frican Languages",
+     author = "Hedderich, Michael A. and
+       Adelani, David and
+       Zhu, Dawei and
+       Alabi, Jesujoba and
+       Markus, Udia and
+       Klakow, Dietrich",
+     booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
+     month = nov,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.emnlp-main.204",
+     doi = "10.18653/v1/2020.emnlp-main.204",
+     pages = "2580--2591",
+ }
+ """
+ 
+ _DESCRIPTION = """\
+ The Hausa VOA NER dataset is a labeled dataset for named entity recognition in Hausa. The texts were obtained from
+ Hausa Voice of America News articles https://www.voahausa.com/ . We concentrate on
+ four types of named entities: persons [PER], locations [LOC], organizations [ORG], and dates & time [DATE].
+ 
+ The Hausa VOA NER data files contain 2 columns separated by a tab ('\t'). Each word has been put on a separate line and
+ there is an empty line after each sentence, i.e. the CoNLL format. The first item on each line is a word, the second
+ is the named entity tag. The named entity tags have the format I-TYPE, which means that the word is inside a phrase
+ of type TYPE. For every multi-word expression like 'New York', the first word gets a tag B-TYPE and the subsequent words
+ have tags I-TYPE; a word with tag O is not part of a phrase. The dataset is in the BIO tagging scheme.
+ 
+ For more details, see https://www.aclweb.org/anthology/2020.emnlp-main.204/
+ """
+ 
+ _URL = "https://github.com/uds-lsv/transfer-distant-transformer-african/raw/master/data/hausa_ner/"
+ _TRAINING_FILE = "train_clean.tsv"
+ _DEV_FILE = "dev.tsv"
+ _TEST_FILE = "test.tsv"
+ 
+ 
+ class HausaVoaNerConfig(datasets.BuilderConfig):
+     """BuilderConfig for HausaVoaNer."""
+ 
+     def __init__(self, **kwargs):
+         """BuilderConfig for HausaVoaNer.
+ 
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(HausaVoaNerConfig, self).__init__(**kwargs)
+ 
+ 
+ class HausaVoaNer(datasets.GeneratorBasedBuilder):
+     """Hausa VOA NER dataset."""
+ 
+     BUILDER_CONFIGS = [
+         HausaVoaNerConfig(
+             name="hausa_voa_ner", version=datasets.Version("1.0.0"), description="Hausa VOA NER dataset"
+         ),
+     ]
+ 
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "ner_tags": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 "O",
+                                 "B-PER",
+                                 "I-PER",
+                                 "B-ORG",
+                                 "I-ORG",
+                                 "B-LOC",
+                                 "I-LOC",
+                                 "B-DATE",
+                                 "I-DATE",
+                             ]
+                         )
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://www.aclweb.org/anthology/2020.emnlp-main.204/",
+             citation=_CITATION,
+         )
+ 
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         urls_to_download = {
+             "train": f"{_URL}{_TRAINING_FILE}",
+             "dev": f"{_URL}{_DEV_FILE}",
+             "test": f"{_URL}{_TEST_FILE}",
+         }
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+ 
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
+         ]
+ 
+     def _generate_examples(self, filepath):
+         logging.info("⏳ Generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             guid = 0
+             tokens = []
+             ner_tags = []
+             for line in f:
+                 line = line.strip()
+                 if line.startswith("-DOCSTART-") or line == "" or line == "\n":
+                     # A blank line (or DOCSTART marker) ends the current sentence.
+                     if tokens:
+                         yield guid, {
+                             "id": str(guid),
+                             "tokens": tokens,
+                             "ner_tags": ner_tags,
+                         }
+                         guid += 1
+                         tokens = []
+                         ner_tags = []
+                 else:
+                     # hausa_voa_ner tokens are tab separated
+                     splits = line.strip().split("\t")
+                     tokens.append(splits[0])
+                     ner_tags.append(splits[1].rstrip())
+             # last example
+             yield guid, {
+                 "id": str(guid),
+                 "tokens": tokens,
+                 "ner_tags": ner_tags,
+             }
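The two-column CoNLL-style parsing that `_generate_examples` performs can be sketched on an in-memory example. This is a minimal stand-alone re-implementation for illustration (the function name `parse_conll` is ours); the real builder streams the downloaded TSV files instead:

```python
import io

def parse_conll(fileobj):
    """Parse tab-separated token/tag lines; blank lines separate sentences."""
    sentences = []
    tokens, tags = [], []
    for line in fileobj:
        line = line.strip()
        if not line or line.startswith("-DOCSTART-"):
            # Blank line (or DOCSTART marker) closes the current sentence.
            if tokens:
                sentences.append({"tokens": tokens, "ner_tags": tags})
                tokens, tags = [], []
        else:
            token, tag = line.split("\t")
            tokens.append(token)
            tags.append(tag)
    if tokens:  # flush the last sentence even without a trailing blank line
        sentences.append({"tokens": tokens, "ner_tags": tags})
    return sentences

sample = "Trump\tB-PER\nya\tO\n\nRasha\tB-LOC\nma\tO\n"
print(parse_conll(io.StringIO(sample)))
# → [{'tokens': ['Trump', 'ya'], 'ner_tags': ['B-PER', 'O']},
#    {'tokens': ['Rasha', 'ma'], 'ner_tags': ['B-LOC', 'O']}]
```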