system HF staff committed on
Commit
c96033d
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +150 -0
  3. dataset_infos.json +1 -0
  4. dummy/ast/0.0.0/dummy_data.zip +3 -0
  5. py_ast.py +155 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,150 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - found
+ languages:
+ - code
+ licenses:
+ - 0bsd
+ - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - sequence-modeling-code-modeling
+ ---
+ # Dataset Card for py_ast
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [py150](https://www.sri.inf.ethz.ch/py150)
+ - **Paper:** [Probabilistic Model for Code with Decision Trees](https://dl.acm.org/doi/10.1145/3022671.2984041)
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ The dataset consists of parsed ASTs that were used to train and evaluate the DeepSyn tool.
+ The Python programs were collected from GitHub repositories by removing duplicate files,
+ removing project forks (copies of existing repositories), and keeping only programs that
+ parse and have at most 30,000 nodes in the AST; an effort was also made to remove obfuscated files.
+
+ ### Supported Tasks and Leaderboards
+
+ Code Representation, Unsupervised Learning
+
+ ### Languages
+
+ Python
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical data point contains the parsed AST of a Python program, stored under the key `ast`
+ as a flat list of nodes. Each node has:
+ `type`, the kind of AST node;
+ `children`, the indices of the node's children in the list (an empty list if it has none); and
+ `value`, the node's hardcoded value, if any (else "N/A").
+ An example looks like this:
+ ```json
+ [
+   {"type": "Module", "children": [1, 4]},
+   {"type": "Assign", "children": [2, 3]},
+   {"type": "NameStore", "value": "x"},
+   {"type": "Num", "value": "7"},
+   {"type": "Print", "children": [5]},
+   {"type": "BinOpAdd", "children": [6, 7]},
+   {"type": "NameLoad", "value": "x"},
+   {"type": "Num", "value": "1"}
+ ]
+ ```
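+
+ Because `children` holds indices into this flat list rather than nested nodes, reconstructing
+ the tree is a short recursive walk. The following is a minimal, illustrative sketch (not part
+ of the dataset); `flat_ast` and `print_tree` are hypothetical names:
+
+ ```python
+ # Pretty-print a flat AST in which `children` are indices into the node list.
+ flat_ast = [
+     {"type": "Module", "children": [1, 4]},
+     {"type": "Assign", "children": [2, 3]},
+     {"type": "NameStore", "value": "x"},
+     {"type": "Num", "value": "7"},
+     {"type": "Print", "children": [5]},
+     {"type": "BinOpAdd", "children": [6, 7]},
+     {"type": "NameLoad", "value": "x"},
+     {"type": "Num", "value": "1"},
+ ]
+
+ def print_tree(nodes, index=0, depth=0):
+     node = nodes[index]
+     label = node["type"]
+     if "value" in node:
+         label += " = " + node["value"]
+     print("  " * depth + label)
+     for child in node.get("children", []):
+         print_tree(nodes, child, depth + 1)
+
+ print_tree(flat_ast)  # Module -> Assign -> NameStore = x, Num = 7, Print -> ...
+ ```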
+ ### Data Fields
+
+ - `ast`: a list of dictionaries, in which every dictionary is a node of the Abstract Syntax Tree.
+   - `type`: the type of the node.
+   - `children`: the indices of the nodes that are children of this node.
+   - `value`: the hardcoded value, if the node holds one.
+
+ ### Data Splits
+
+ The data is split into a training and a test set.
+ The final split sizes are as follows:
+
+ |                 | Train  | Test  |
+ | --------------- | ------ | ----- |
+ | py_ast examples | 100000 | 50000 |
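+
+ As a quick, illustrative sketch (assuming the loading script below is used as-is under the
+ name `py_ast`), the splits can be loaded and inspected with the `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the "ast" configuration, the only one defined by py_ast.py.
+ dataset = load_dataset("py_ast", "ast")
+ print(dataset["train"].num_rows, dataset["test"].num_rows)  # 100000 50000
+
+ # A Sequence of dicts is returned as a dict of parallel lists, so the nodes
+ # of the first program appear as aligned lists of types, values, and children.
+ ast = dataset["train"][0]["ast"]
+ print(ast["type"][0], ast["children"][0])  # root node's type and child indices
+ ```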
+ ## Dataset Creation
+
+ [More Information Needed]
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Raychev, V., Bielik, P., and Vechev, M.
+
+ ### Licensing Information
+
+ MIT, BSD, and Apache
+
+ ### Citation Information
+
+ @inproceedings{raychev2016probabilistic,
+   title={Probabilistic Model for Code with Decision Trees},
+   author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
+   booktitle={Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA '16)},
+   year={2016}
+ }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"ast": {"description": "Dataset consisting of parsed ASTs that were used to train and\nevaluate the DeepSyn tool.\nThe Python programs were collected from GitHub repositories\nby removing duplicate files and project forks (copies of existing repositories),\nkeeping only programs that parse and have at most 30,000 nodes in the AST,\nand aiming to remove obfuscated files.", "citation": "@inproceedings{raychev2016probabilistic,\n  title={Probabilistic Model for Code with Decision Trees},\n  author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},\n  booktitle={Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA '16)},\n  year={2016}\n}\n", "homepage": "https://www.sri.inf.ethz.ch/py150", "license": "", "features": {"ast": {"feature": {"type": {"dtype": "string", "id": null, "_type": "Value"}, "value": {"dtype": "string", "id": null, "_type": "Value"}, "children": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": {"input": "ast", "output": ""}, "builder_name": "py_ast", "config_name": "ast", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1870790180, "num_examples": 100000, "dataset_name": "py_ast"}, "test": {"name": "test", "num_bytes": 907514993, "num_examples": 50000, "dataset_name": "py_ast"}}, "download_checksums": {"http://files.srl.inf.ethz.ch/data/py150.tar.gz": {"num_bytes": 526642289, "checksum": "4093b331d43c795e39fb5f156ccb7dcbb04c5d745d5e840c2d6926c11292dbd4"}}, "download_size": 526642289, "post_processing_size": null, "dataset_size": 2778305173, "size_in_bytes": 3304947462}}
dummy/ast/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:163e63e12068053391edbd0ee004a0c51a7961236476d8f6a2ec09c8faa234ac
+ size 918
py_ast.py ADDED
@@ -0,0 +1,155 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """py_ast: parsed ASTs of the Python programs in the py150 dataset."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{raychev2016probabilistic,
+   title={Probabilistic Model for Code with Decision Trees},
+   author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
+   booktitle={Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA '16)},
+   year={2016}
+ }
+ """
+
35
+ # TODO: Add description of the dataset here
36
+ # You can copy an official description
37
+ _DESCRIPTION = """\
38
+ dataset consisting of parsed Parsed ASTs that were used to train and
39
+ evaluate the DeepSyn tool.
40
+ The Python programs are collected from GitHub repositories
41
+ by removing duplicate files, removing project forks (copy of another existing repository)
42
+ ,keeping only programs that parse and have at most 30'000 nodes in the AST and
43
+ we aim to remove obfuscated files"""
44
+
45
+ # TODO: Add a link to an official homepage for the dataset here
46
+ _HOMEPAGE = "https://www.sri.inf.ethz.ch/py150"
47
+ # TODO: Add the licence for the dataset here if you can find it
48
+ _LICENSE = ""
49
+
50
+ # TODO: Add link to the official dataset URLs here
51
+ # The HuggingFace dataset library don't host the datasets but only point to the original files
52
+ # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
53
+ _URLs = {
54
+ "ast": "http://files.srl.inf.ethz.ch/data/py150.tar.gz",
55
+ }
+
+
+ class PyAst(datasets.GeneratorBasedBuilder):
+     """Parsed ASTs of the Python programs in the py150 dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     # You will be able to load this configuration with
+     # data = datasets.load_dataset('py_ast', 'ast')
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="ast", description="This part of the dataset contains 150,000 parsed Python ASTs."),
+     ]
+
+     DEFAULT_CONFIG_NAME = "ast"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
+
+     def _info(self):
+         # This method specifies the datasets.DatasetInfo object, which contains information and typings for the dataset.
+         if self.config.name == "ast":  # This is the name of the configuration selected in BUILDER_CONFIGS above
+             features = datasets.Features(
+                 {
+                     "ast": datasets.Sequence(
+                         {
+                             "type": datasets.Value("string"),
+                             "value": datasets.Value("string"),
+                             "children": datasets.Sequence(datasets.Value("int32")),
+                         },
+                     )
+                 }
+             )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=features,
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=["ast"],
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # This method is tasked with downloading/extracting the data and defining the splits depending on the configuration.
+         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name.
+
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs.
+         # It can accept any type or nested list/dict and will give back the same structure with the URLs replaced with paths to local files.
+         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive.
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "python100k_train.json"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"filepath": os.path.join(data_dir, "python50k_eval.json"), "split": "test"},
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+         # This method receives as arguments the `gen_kwargs` defined in the previous `_split_generators` method.
+         # It is in charge of opening the given file and yielding (key, example) tuples from the dataset.
+         # The key is not important, it's more here for legacy reasons (legacy from tfds).
+
+         with open(filepath, encoding="utf-8") as f:
+             for id_, row in enumerate(f):
+                 row_data = json.loads(row)
+                 # Normalize the nodes so every one of them has the same keys:
+                 # nodes without a literal get value "N/A", leaves get an empty children list.
+                 for node in row_data:
+                     if "value" not in node:
+                         node["value"] = "N/A"
+                     if "children" not in node:
+                         node["children"] = []
+                 yield id_, {
+                     "ast": row_data,
+                 }