system HF staff committed on
Commit
b0b2cff
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,159 @@
+ ---
+ annotations_creators:
+ - no-annotations
+ language_creators:
+ - machine-generated
+ languages:
+ - en
+ licenses:
+ - apache-2-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - other
+ task_ids:
+ - other-other-contextual-embeddings
+ ---
+
+ # Dataset Card for eth_py150_open
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://www.sri.inf.ethz.ch/py150
+ - **Repository:** https://github.com/google-research-datasets/eth_py150_open
+ - **Paper:** https://proceedings.icml.cc/static/paper_files/icml/2020/5401-Paper.pdf
+ - **Leaderboard:** None
+ - **Point of Contact:** Aditya Kanade <kanade@iisc.ac.in>, Petros Maniatis <maniatis@google.com>
+
+ ### Dataset Summary
+
+ A redistributable subset of the [ETH Py150 corpus](https://www.sri.inf.ethz.ch/py150), introduced in the ICML 2020 paper ['Learning and Evaluating Contextual Embedding of Source Code'](https://proceedings.icml.cc/static/paper_files/icml/2020/5401-Paper.pdf).
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ Each split is a list of dicts of the form:
+
+ {
+   "filepath": the path to the file on GitHub, relative to https://github.com/,
+   "license": the license used for that specific file or repository
+ }
+
+ ### Data Instances
+
+ {
+   "filepath": "0rpc/zerorpc-python/setup.py",
+   "license": "mit"
+ },
+ {
+   "filepath": "0rpc/zerorpc-python/zerorpc/heartbeat.py",
+   "license": "mit"
+ },
+
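+ The dataset can be loaded with the `datasets` library; a minimal sketch (the dataset id `eth_py150_open` matches this repository, and the printed value is illustrative, taken from the instances above):
+
+ ```python
+ from datasets import load_dataset
+
+ # Downloads the three manifest files and builds the train/validation/test splits
+ dataset = load_dataset("eth_py150_open")
+
+ print(dataset["train"][0])
+ # e.g. {'filepath': '0rpc/zerorpc-python/setup.py', 'license': 'mit'}
+ ```
+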
+ ### Data Fields
+
+ - `filepath`: the path to the file on GitHub, relative to https://github.com/
+ - `license`: the license used for that specific file or repository
+
+ ### Data Splits
+
+ |                    | Train | Valid | Test  |
+ | ------------------ | ----- | ----- | ----- |
+ | Number of examples | 74749 | 8302  | 41457 |
+
+ ## Dataset Creation
+
+ The original dataset is available at https://www.sri.inf.ethz.ch/py150.
+
+ ### Curation Rationale
+
+ To generate a more redistributable version of the dataset.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ All the URLs are file paths relative to GitHub; the `master` branch of each repository was used, as available at the time of collection.
+
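+ In other words, the contents of each file can be fetched from GitHub's raw-content host. A sketch (the `fetch_source` helper is hypothetical, the `master` branch is assumed per the note above, and some repositories may have moved or disappeared since collection):
+
+ ```python
+ import requests
+
+ def fetch_source(filepath: str, branch: str = "master") -> str:
+     """Fetch one corpus file; `filepath` has the form "<user>/<repo>/<path>"."""
+     user, repo, path = filepath.split("/", 2)
+     url = f"https://raw.githubusercontent.com/{user}/{repo}/{branch}/{path}"
+     response = requests.get(url)
+     response.raise_for_status()
+     return response.text
+
+ # Usage: fetch_source("0rpc/zerorpc-python/setup.py")
+ ```
+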
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Apache License 2.0
+
+ ### Citation Information
+
+ @inproceedings{kanade2020learning,
+   title={Learning and Evaluating Contextual Embedding of Source Code},
+   author={Kanade, Aditya and Maniatis, Petros and Balakrishnan, Gogul and Shi, Kensen},
+   booktitle={International Conference on Machine Learning},
+   pages={5110--5121},
+   year={2020},
+   organization={PMLR}
+ }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"eth_py150_open": {"description": "A redistributable subset of the ETH Py150 corpus, introduced in the ICML 2020 paper 'Learning and Evaluating Contextual Embedding of Source Code'\n", "citation": "@inproceedings{kanade2020learning,\n title={Learning and Evaluating Contextual Embedding of Source Code},\n author={Kanade, Aditya and Maniatis, Petros and Balakrishnan, Gogul and Shi, Kensen},\n booktitle={International Conference on Machine Learning},\n pages={5110--5121},\n year={2020},\n organization={PMLR}\n}\n", "homepage": "https://github.com/google-research-datasets/eth_py150_open", "license": "Apache License, Version 2.0", "features": {"filepath": {"dtype": "string", "id": null, "_type": "Value"}, "license": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "filepath", "output": "license"}, "builder_name": "eth_py150_open", "config_name": "eth_py150_open", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5414978, "num_examples": 74749, "dataset_name": "eth_py150_open"}, "test": {"name": "test", "num_bytes": 3006199, "num_examples": 41457, "dataset_name": "eth_py150_open"}, "validation": {"name": "validation", "num_bytes": 598524, "num_examples": 8302, "dataset_name": "eth_py150_open"}}, "download_checksums": {"https://raw.githubusercontent.com/google-research-datasets/eth_py150_open/master/train__manifest.json": {"num_bytes": 8330299, "checksum": "faa632baf3a3e3ba234cc917dacd07fb646995990c930c6b86598d4d10484ce9"}, "https://raw.githubusercontent.com/google-research-datasets/eth_py150_open/master/dev__manifest.json": {"num_bytes": 922321, "checksum": "974426ff7448e7afd1fd26375814b264132b3eb62d4a995458c23f36857b4821"}, "https://raw.githubusercontent.com/google-research-datasets/eth_py150_open/master/eval__manifest.json": {"num_bytes": 4623051, "checksum": "b9a3235cb7457dac4bbb0cb7b31bc39186d78c318ec82c376bf1b61e66868554"}}, "download_size": 13875671, "post_processing_size": null, "dataset_size": 9019701, "size_in_bytes": 22895372}}
dummy/eth_py150_open/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5ff1fea58d25d2f7f1f6d375ab344a179805d7b5149998a88138c257d9d42fcb
+ size 955
eth_py150_open.py ADDED
@@ -0,0 +1,135 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """A redistributable subset of the ETH Py150 corpus"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{kanade2020learning,
+ title={Learning and Evaluating Contextual Embedding of Source Code},
+ author={Kanade, Aditya and Maniatis, Petros and Balakrishnan, Gogul and Shi, Kensen},
+ booktitle={International Conference on Machine Learning},
+ pages={5110--5121},
+ year={2020},
+ organization={PMLR}
+ }
+ """
+
+
+ _DESCRIPTION = """\
+ A redistributable subset of the ETH Py150 corpus, introduced in the ICML 2020 paper 'Learning and Evaluating Contextual Embedding of Source Code'
+ """
+
+ _HOMEPAGE = "https://github.com/google-research-datasets/eth_py150_open"
+
+ _LICENSE = "Apache License, Version 2.0"
+
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files
+ # This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method)
+ _URL = "https://raw.githubusercontent.com/google-research-datasets/eth_py150_open/master/"
+
+
+ class EthPy150Open(datasets.GeneratorBasedBuilder):
+     """A redistributable subset of the ETH Py150 corpus"""
+
+     VERSION = datasets.Version("1.1.0")
+
+     # This dataset defines a single configuration, loadable with
+     # data = datasets.load_dataset("eth_py150_open")
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="eth_py150_open", version=VERSION, description="A subset of the original Py150 corpus"
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features({"filepath": datasets.Value("string"), "license": datasets.Value("string")})
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,
+             # The (input, target) tuple used if as_supervised=True in builder.as_dataset
+             supervised_keys=("filepath", "license"),
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs.
+         # It accepts any nested list/dict of URLs and gives back the same structure with each URL
+         # replaced by a path to the locally cached file.
+         urls = {
+             "train": _URL + "train__manifest.json",
+             "dev": _URL + "dev__manifest.json",
+             "test": _URL + "eval__manifest.json",
+         }
+         data_dir = dl_manager.download_and_extract(urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"filepath": data_dir["train"], "split": "train"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": data_dir["test"], "split": "test"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": data_dir["dev"], "split": "dev"},
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields (key, example) tuples."""
+         # Each manifest file is a JSON list of {"filepath": ..., "license": ...} records.
+         # The key is not important; it is kept mostly for legacy reasons (inherited from tfds).
+         with open(filepath, encoding="utf-8") as f:
+             for id_, row in enumerate(json.load(f)):
+                 yield id_, row
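
A local copy of this script can be exercised end to end before relying on the Hub version; a minimal sketch (the relative path to the script is hypothetical):

```python
from datasets import load_dataset

# load_dataset also accepts a path to a local loading script; this builds
# all three splits by downloading the manifest files listed in _URL.
ds = load_dataset("./eth_py150_open.py")
print(ds)
```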