system HF staff committed on
Commit
384b811
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,152 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - other-LDC User Agreement for Non-Members
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n<1K
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - language-modeling
+ ---
+
+ # Dataset Card for Penn Treebank
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://catalog.ldc.upenn.edu/LDC99T42
+
+ - **Repository:** https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.train.txt,
+   https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.valid.txt,
+   https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.test.txt
+ - **Paper:** https://www.aclweb.org/anthology/J93-2004.pdf
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Needs More Information]
+
+ ### Dataset Summary
+
+ This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material.
+ Rare words in this version have already been replaced with the `<unk>` token, and numbers with the `<N>` token.
+
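+ A minimal loading sketch, assuming the dataset is served by the `datasets` library under the `ptb_text_only` name with the `penn_treebank` configuration defined in this repository:
+
+ ```python
+ from datasets import load_dataset
+
+ # Downloads the three plain-text splits listed under "Repository" above and caches them locally.
+ ptb = load_dataset("ptb_text_only", "penn_treebank")
+
+ print(ptb)                          # DatasetDict with train / validation / test splits
+ print(ptb["train"][0]["sentence"])  # each example is a single "sentence" string
+ ```
+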
+ ### Supported Tasks and Leaderboards
+
+ Language Modelling
+
+ ### Languages
+
+ The text in the dataset is in American English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [Needs More Information]
+
+ ### Data Fields
+
+ [Needs More Information]
+
+ ### Data Splits
+
+ [Needs More Information]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ LDC User Agreement for Non-Members
+
+ ### Citation Information
+
+ @article{marcus-etal-1993-building,
+     title = "Building a Large Annotated Corpus of {E}nglish: The {P}enn {T}reebank",
+     author = "Marcus, Mitchell P. and
+       Santorini, Beatrice and
+       Marcinkiewicz, Mary Ann",
+     journal = "Computational Linguistics",
+     volume = "19",
+     number = "2",
+     year = "1993",
+     url = "https://www.aclweb.org/anthology/J93-2004",
+     pages = "313--330",
+ }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"penn_treebank": {"description": "This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material. This corpus has been annotated for part-of-speech (POS) information. In addition, over half of it has been annotated for skeletal syntactic structure.\n", "citation": "@article{marcus-etal-1993-building,\n title = \"Building a Large Annotated Corpus of {E}nglish: The {P}enn {T}reebank\",\n author = \"Marcus, Mitchell P. and\n Santorini, Beatrice and\n Marcinkiewicz, Mary Ann\",\n journal = \"Computational Linguistics\",\n volume = \"19\",\n number = \"2\",\n year = \"1993\",\n url = \"https://www.aclweb.org/anthology/J93-2004\",\n pages = \"313--330\",\n}\n", "homepage": "https://catalog.ldc.upenn.edu/LDC99T42", "license": "LDC User Agreement for Non-Members", "features": {"sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "ptb_text_only", "config_name": "penn_treebank", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5143706, "num_examples": 42068, "dataset_name": "ptb_text_only"}, "test": {"name": "test", "num_bytes": 453710, "num_examples": 3761, "dataset_name": "ptb_text_only"}, "validation": {"name": "validation", "num_bytes": 403156, "num_examples": 3370, "dataset_name": "ptb_text_only"}}, "download_checksums": {"https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.train.txt": {"num_bytes": 5101618, "checksum": "fcea919f6cf83f35d4d00c6cbf08040d13d4155226340912e2fef9c9c4102cbf"}, "https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.valid.txt": {"num_bytes": 399782, "checksum": "c9fe6985fe0d4ccb578183407d7668fc6066c20700cb4cf87d8ff1cc34df1bf2"}, "https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.test.txt": {"num_bytes": 449945, "checksum": "dd65dff31e70846b2a6030a87482edcd5d199130cdcfa1f3dccbb033728deee0"}}, "download_size": 5951345, "post_processing_size": null, "dataset_size": 6000572, "size_in_bytes": 11951917}}
dummy/penn_treebank/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55382c3c5996d9b7d2427f0812c04ac2b2fbafeabfbf9898f2c53b2cf1d6b337
+ size 1577
ptb_text_only.py ADDED
@@ -0,0 +1,147 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """
+ Load the Penn Treebank dataset.
+
+ This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall
+ Street Journal material.
+ """
+
+ from __future__ import absolute_import, division, print_function
+
+ import datasets
+
+
+ # BibTeX citation for the Penn Treebank paper
+ # (https://www.aclweb.org/anthology/J93-2004)
+ _CITATION = """\
+ @article{marcus-etal-1993-building,
+     title = "Building a Large Annotated Corpus of {E}nglish: The {P}enn {T}reebank",
+     author = "Marcus, Mitchell P. and
+       Santorini, Beatrice and
+       Marcinkiewicz, Mary Ann",
+     journal = "Computational Linguistics",
+     volume = "19",
+     number = "2",
+     year = "1993",
+     url = "https://www.aclweb.org/anthology/J93-2004",
+     pages = "313--330",
+ }
+ """
+
+ # Description of the dataset,
+ # shown on the datasets page.
+ _DESCRIPTION = """\
+ This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material. This corpus has been annotated for part-of-speech (POS) information. In addition, over half of it has been annotated for skeletal syntactic structure.
+ """
+
+ # Official homepage for the dataset
+ _HOMEPAGE = "https://catalog.ldc.upenn.edu/LDC99T42"
+
+ # Licence for the dataset
+ _LICENSE = "LDC User Agreement for Non-Members"
+
+ # Links to the official dataset files.
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files.
+ # This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method).
+ _URL = "https://raw.githubusercontent.com/wojzaremba/lstm/master/data/"
+ _TRAINING_FILE = "ptb.train.txt"
+ _DEV_FILE = "ptb.valid.txt"
+ _TEST_FILE = "ptb.test.txt"
+
+
+ class PtbTextOnlyConfig(datasets.BuilderConfig):
+     """BuilderConfig for PtbTextOnly."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for PtbTextOnly.
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(PtbTextOnlyConfig, self).__init__(**kwargs)
+
+
+ class PtbTextOnly(datasets.GeneratorBasedBuilder):
+     """Load the Penn Treebank dataset."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     # This dataset defines a single configuration, `penn_treebank`.
+     # If you don't want/need to define several sub-sets in your dataset,
+     # you can remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
+
+     # If you need to make complex sub-parts in the dataset with configurable options,
+     # you can create your own builder configuration class to store attributes, inheriting from datasets.BuilderConfig:
+     # BUILDER_CONFIG_CLASS = MyBuilderConfig
+
+     # The configuration below can be loaded with
+     # data = datasets.load_dataset("ptb_text_only", "penn_treebank")
+     # where the second argument selects the configuration by name.
+     BUILDER_CONFIGS = [
+         PtbTextOnlyConfig(
+             name="penn_treebank",
+             version=VERSION,
+             description="Load the Penn Treebank dataset",
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features({"sentence": datasets.Value("string")})
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=features,  # A single text column named "sentence"
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # This method downloads/extracts the data and defines the splits depending on the configuration.
+         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name.
+
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs.
+         # It can accept any type or nested list/dict and will give back the same structure with the URLs replaced by paths to local files.
+         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive.
+         my_urls = {
+             "train": f"{_URL}{_TRAINING_FILE}",
+             "dev": f"{_URL}{_DEV_FILE}",
+             "test": f"{_URL}{_TEST_FILE}",
+         }
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_dir["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_dir["test"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": data_dir["dev"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+         # This method receives as arguments the `gen_kwargs` defined in the `_split_generators` method above.
+         # It is in charge of opening the given file and yielding (key, example) tuples from the dataset.
+         # The key is not important; it's mostly here for legacy reasons (legacy from tfds).
+         with open(filepath, encoding="utf-8") as f:
+             for id_, line in enumerate(f):
+                 line = line.strip()
+                 yield id_, {"sentence": line}
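For reviewing this script, `load_dataset` can also be pointed at the file itself rather than the Hub name. A hedged sketch (the relative path is illustrative and assumes the raw GitHub URLs in `_URL` are reachable):

```python
from datasets import load_dataset

# Run the builder directly from a local checkout of this repository
# (adjust the path to wherever ptb_text_only.py lives).
ptb = load_dataset("./ptb_text_only.py", "penn_treebank")
print({split: dataset.num_rows for split, dataset in ptb.items()})
```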