Commit 9a5cab1 (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5):
  1. .gitattributes +27 -0
  2. README.md +171 -0
  3. dataset_infos.json +1 -0
  4. dummy/0.0.0/dummy_data.zip +3 -0
  5. ilist.py +102 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
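The patterns above tell Git LFS which files to store as pointers instead of committing their bytes directly. As a rough illustration only (a hypothetical helper, not Git's real matcher — gitattributes has its own rules, such as matching slash-free patterns against the basename in any directory), a Python sketch with `fnmatch`:

```python
from fnmatch import fnmatch

# Hypothetical subset of the LFS patterns declared in .gitattributes above.
LFS_PATTERNS = ["*.7z", "*.zip", "*.parquet", "*tfevents*"]

def is_lfs_tracked(path: str, patterns=LFS_PATTERNS) -> bool:
    """Return True if the file's basename matches any LFS pattern.

    Approximates gitattributes matching for the slash-free patterns
    above; Git's actual matcher implements more rules.
    """
    basename = path.rsplit("/", 1)[-1]
    return any(fnmatch(basename, pat) for pat in patterns)
```

For example, `dummy/0.0.0/dummy_data.zip` matches `*.zip`, which is why the diff below shows an LFS pointer for it rather than zip bytes.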
README.md ADDED
@@ -0,0 +1,171 @@
+ ---
+ task_categories:
+ - text-classification
+ multilinguality:
+ - multilingual
+ task_ids:
+ - text-classification-other-language-identification
+ languages:
+ - hi
+ - awa
+ - bho
+ - mag
+ - bra
+ annotations_creators:
+ - unknown
+ source_datasets:
+ - original
+ size_categories:
+ - 10K<n<100K
+ licenses:
+ - unknown
+ ---
+
+ # Dataset Card for ILIST
+
+ ## Table of Contents
+ - [Dataset Card for ILIST](#dataset-card-for-ilist)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/kmi-linguistics/vardial2018)
+ - **Repository:** [GitHub](https://github.com/kmi-linguistics/vardial2018)
+ - **Paper:** [VarDial 2018](https://www.aclweb.org/anthology/W18-3900/)
+ - **Leaderboard:**
+ - **Point of Contact:** linguistics.kmi@gmail.com
+
+ ### Dataset Summary
+
+ This dataset was introduced in a shared task aimed at identifying five closely related languages of the Indo-Aryan language family: Hindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri, and Magahi. These languages form a continuum running from western Uttar Pradesh (Hindi and Braj Bhasha) through eastern Uttar Pradesh (Awadhi and Bhojpuri) to the neighbouring eastern state of Bihar (Bhojpuri and Magahi). Participants were provided with approximately 15,000 sentences per language, drawn mainly from literature published on the web as well as in print.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ Hindi, Braj Bhasha, Awadhi, Bhojpuri, and Magahi
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {
+     "language_id": 4,
+     "text": "तभी बारिश हुई थी जिसका गीलापन इन मूर्तियों को इन तस्वीरों में एक अलग रूप देता है ."
+ }
+ ```
+
+ ### Data Fields
+
+ - `text`: the sentence to classify
+ - `language_id`: the label for the text, an integer from 0 to 4. The ids correspond, in order, to the languages "AWA", "BRA", "MAG", "BHO", "HIN".
+
+ ### Data Splits
+
+ |                      | train | valid | test |
+ |----------------------|-------|-------|------|
+ | # of input sentences | 70351 | 10329 | 9692 |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ [More Information Needed]
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @proceedings{ws-2018-nlp-similar,
+     title = "Proceedings of the Fifth Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial 2018)",
+     editor = {Zampieri, Marcos and
+       Nakov, Preslav and
+       Ljube{\v{s}}i{\'c}, Nikola and
+       Tiedemann, J{\"o}rg and
+       Malmasi, Shervin and
+       Ali, Ahmed},
+     month = aug,
+     year = "2018",
+     address = "Santa Fe, New Mexico, USA",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/W18-3900",
+ }
+ ```
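The integer `language_id` labels map to language codes via the loader's `ClassLabel` order. A minimal plain-Python sketch of that mapping (so it runs without the `datasets` library, which provides this via `ClassLabel.int2str`/`str2int`):

```python
# Label order as declared by the loader's ClassLabel in ilist.py.
ID2LANG = ["AWA", "BRA", "MAG", "BHO", "HIN"]

def id_to_language(language_id: int) -> str:
    """Map an integer label (0-4) to its language code."""
    return ID2LANG[language_id]

def language_to_id(code: str) -> int:
    """Map a language code back to its integer label."""
    return ID2LANG.index(code)
```

The sample instance above, with `language_id` 4 and a Hindi sentence, decodes to "HIN" under this mapping.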
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "This dataset is introduced in a task which aimed at identifying 5 closely-related languages of Indo-Aryan language family \u2013\nHindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri, and Magahi.\n", "citation": "@proceedings{ws-2018-nlp-similar,\n title = \"Proceedings of the Fifth Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial 2018)\",\n editor = {Zampieri, Marcos and\n Nakov, Preslav and\n Ljube{\u000b{s}}i{'c}, Nikola and\n Tiedemann, J{\"o}rg and\n Malmasi, Shervin and\n Ali, Ahmed},\n month = aug,\n year = \"2018\",\n address = \"Santa Fe, New Mexico, USA\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/W18-3900\",\n}\n", "homepage": "https://github.com/kmi-linguistics/vardial2018", "license": "", "features": {"language_id": {"num_classes": 5, "names": ["AWA", "BRA", "MAG", "BHO", "HIN"], "names_file": null, "id": null, "_type": "ClassLabel"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "ilist", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 14362998, "num_examples": 70351, "dataset_name": "ilist"}, "test": {"name": "test", "num_bytes": 2146857, "num_examples": 9692, "dataset_name": "ilist"}, "validation": {"name": "validation", "num_bytes": 2407643, "num_examples": 10329, "dataset_name": "ilist"}}, "download_checksums": {"https://raw.githubusercontent.com/kmi-linguistics/vardial2018/master/dataset/train.txt": {"num_bytes": 13870509, "checksum": "1bd3ae96dc17ce44278cff256972649b510d6d8595f420e95bc8284f207e2678"}, "https://raw.githubusercontent.com/kmi-linguistics/vardial2018/master/dataset/gold.txt": {"num_bytes": 2079009, "checksum": "72909da09ed1c1f3710c879ca5b69282e483ce60fe7d90497cfbca46016da704"}, "https://raw.githubusercontent.com/kmi-linguistics/vardial2018/master/dataset/dev.txt": {"num_bytes": 2335332, "checksum": "2ef7944502bb2ee49358873e5d9de241f0a8a8b8a9b88e3e8c37873afd783797"}}, "download_size": 18284850, "post_processing_size": null, "dataset_size": 18917498, "size_in_bytes": 37202348}}
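The split metadata in `dataset_infos.json` can be read programmatically. A small sketch, using an inline excerpt of the `"splits"` section above so it runs standalone:

```python
import json

# Inline excerpt of the "splits" section of dataset_infos.json above.
SPLITS_JSON = """
{
  "train": {"num_examples": 70351},
  "test": {"num_examples": 9692},
  "validation": {"num_examples": 10329}
}
"""

splits = json.loads(SPLITS_JSON)
# Total sentences across train, test, and validation.
total = sum(s["num_examples"] for s in splits.values())
```

The three splits sum to 90,372 sentences.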
dummy/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52766145a64cced80bb6ba562dd810a29090a2aa24b0970f983f083bb83f2f53
+ size 1811
ilist.py ADDED
@@ -0,0 +1,102 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Indo-Aryan Language Identification Shared Task Dataset"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import datasets
+
+
+ # LaTeX backslashes are doubled so Python does not interpret escapes
+ # such as "\v" (vertical tab) inside this non-raw string.
+ _CITATION = """\
+ @proceedings{ws-2018-nlp-similar,
+     title = "Proceedings of the Fifth Workshop on {NLP} for Similar Languages, Varieties and Dialects ({V}ar{D}ial 2018)",
+     editor = {Zampieri, Marcos and
+       Nakov, Preslav and
+       Ljube{\\v{s}}i{\\'c}, Nikola and
+       Tiedemann, J{\\"o}rg and
+       Malmasi, Shervin and
+       Ali, Ahmed},
+     month = aug,
+     year = "2018",
+     address = "Santa Fe, New Mexico, USA",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/W18-3900",
+ }
+ """
+
+ _DESCRIPTION = """\
+ This dataset is introduced in a task which aimed at identifying 5 closely-related languages of Indo-Aryan language family –
+ Hindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri, and Magahi.
+ """
+
+ _URL = "https://raw.githubusercontent.com/kmi-linguistics/vardial2018/master/dataset/{}.txt"
+
+
+ class Ilist(datasets.GeneratorBasedBuilder):
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "language_id": datasets.ClassLabel(names=["AWA", "BRA", "MAG", "BHO", "HIN"]),
+                     "text": datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://github.com/kmi-linguistics/vardial2018",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # The upstream repository distributes the test split as "gold.txt".
+         filepaths = dl_manager.download_and_extract(
+             {
+                 "train": _URL.format("train"),
+                 "test": _URL.format("gold"),
+                 "dev": _URL.format("dev"),
+             }
+         )
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": filepaths["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": filepaths["test"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": filepaths["dev"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+         with open(filepath, "r", encoding="utf-8") as file:
+             for idx, row in enumerate(file):
+                 row = row.strip("\n").split("\t")
+                 if len(row) == 1:
+                     # No tab means no label (e.g. a blank line); skip it.
+                     continue
+                 yield idx, {"language_id": row[1], "text": row[0]}
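`_generate_examples` reads each split file as tab-separated `text<TAB>label` lines and skips lines with no tab. The same parsing as a standalone function (a hypothetical helper mirroring the loader's logic, not part of the script):

```python
def parse_ilist_rows(lines):
    """Parse 'text<TAB>label' rows into (id, example) pairs,
    skipping rows without a label, as the loader does."""
    examples = []
    for idx, raw in enumerate(lines):
        row = raw.strip("\n").split("\t")
        if len(row) == 1:  # no tab -> no label; skip
            continue
        examples.append((idx, {"language_id": row[1], "text": row[0]}))
    return examples
```

Example: a two-line input where the second line has no label yields a single example keyed by its line index.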