albertvillanova HF staff committed on
Commit
0adab9c
1 Parent(s): 35dd448

Fix missing tags in dataset cards (#4908)


Commit from https://github.com/huggingface/datasets/commit/c79e737b5247a908f6f63e4bcf9c62abd38ef6ee

Files changed (3)
  1. README.md +26 -6
  2. dataset_infos.json +1 -1
  3. py_ast.py +1 -1
README.md CHANGED
@@ -16,10 +16,11 @@ size_categories:
 source_datasets:
 - original
 task_categories:
+- text2text-generation
 - text-generation
 - fill-mask
-- text-generation
-- fill-mask
+task_ids:
+- text2text-generation-other-code-generation
 - text-generation-other-code-modeling
 paperswithcode_id: null
 ---
@@ -52,16 +53,16 @@ paperswithcode_id: null
 ## Dataset Description
 
 - **homepage**: [py150](https://www.sri.inf.ethz.ch/py150)
-- **Paper**: [Probabilistic Model for Code with Decision Trees](https://dl.acm.org/doi/10.1145/3022671.2984041)
+- **Paper**: [Probabilistic Model for Code with Decision Trees](https://www.semanticscholar.org/paper/Probabilistic-model-for-code-with-decision-trees-Raychev-Bielik/62e176977d439aac2e2d7eca834a7a99016dfcaf)
 - **Leaderboard:**
 - **Point of Contact:**
 
 ### Dataset Summary
 
-The dataset consists of parsed Parsed ASTs that were used to train and evaluate the DeepSyn tool.
+The dataset consists of parsed ASTs that were used to train and evaluate the DeepSyn tool.
 The Python programs are collected from GitHub repositories
-by removing duplicate files, removing project forks (copy of another existing repository)
-,keeping only programs that parse and have at most 30'000 nodes in the AST and
+by removing duplicate files, removing project forks (copy of another existing repository),
+keeping only programs that parse and have at most 30'000 nodes in the AST and
 we aim to remove obfuscated files
 
 ### Supported Tasks and Leaderboards
@@ -155,6 +156,25 @@ authors={Raychev, V., Bielik, P., and Vechev, M.},
 year={2016}
 }
 
+```
+@inproceedings{10.1145/2983990.2984041,
+author = {Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
+title = {Probabilistic Model for Code with Decision Trees},
+year = {2016},
+isbn = {9781450344449},
+publisher = {Association for Computing Machinery},
+address = {New York, NY, USA},
+url = {https://doi.org/10.1145/2983990.2984041},
+doi = {10.1145/2983990.2984041},
+booktitle = {Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications},
+pages = {731–747},
+numpages = {17},
+keywords = {Code Completion, Decision Trees, Probabilistic Models of Code},
+location = {Amsterdam, Netherlands},
+series = {OOPSLA 2016}
+}
+```
+
 ### Contributions
 
 Thanks to [@reshinthadithyan](https://github.com/reshinthadithyan) for adding this dataset.
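The collection criteria stated in the dataset summary (keep only programs that parse and have at most 30'000 nodes in the AST) can be sketched with Python's standard `ast` module. This is an illustrative sketch, not the dataset's actual preprocessing pipeline: the function name `passes_py150_filter` is invented here, and the deduplication, fork-removal, and obfuscation checks are not modeled.

```python
import ast

MAX_NODES = 30_000  # threshold quoted in the dataset summary

def passes_py150_filter(source: str, max_nodes: int = MAX_NODES) -> bool:
    """Return True if `source` parses as Python and its AST has at most `max_nodes` nodes."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    # ast.walk yields every node in the tree, including leaf context nodes.
    node_count = sum(1 for _ in ast.walk(tree))
    return node_count <= max_nodes

print(passes_py150_filter("def f(x):\n    return x + 1\n"))  # True: valid, tiny program
print(passes_py150_filter("def f(:\n"))                      # False: does not parse
```

A real pipeline would apply this per file after deduplication; the node-count bound keeps pathological or generated files out of the training set.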
dataset_infos.json CHANGED
@@ -1 +1 @@
-{"ast": {"description": "dataset consisting of parsed Parsed ASTs that were used to train and\nevaluate the DeepSyn tool.\nThe Python programs are collected from GitHub repositories\nby removing duplicate files, removing project forks (copy of another existing repository)\n,keeping only programs that parse and have at most 30'000 nodes in the AST and\nwe aim to remove obfuscated files", "citation": "@InProceedings{OOPSLA \u201916, ACM,\ntitle = {Probabilistic Model for Code with Decision Trees.},\nauthors={Raychev, V., Bielik, P., and Vechev, M.},\nyear={2016}\n}\n", "homepage": "https://www.sri.inf.ethz.ch/py150", "license": "", "features": {"ast": {"feature": {"type": {"dtype": "string", "id": null, "_type": "Value"}, "value": {"dtype": "string", "id": null, "_type": "Value"}, "children": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": {"input": "ast", "output": ""}, "builder_name": "py_ast", "config_name": "ast", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1870790180, "num_examples": 100000, "dataset_name": "py_ast"}, "test": {"name": "test", "num_bytes": 907514993, "num_examples": 50000, "dataset_name": "py_ast"}}, "download_checksums": {"http://files.srl.inf.ethz.ch/data/py150.tar.gz": {"num_bytes": 526642289, "checksum": "4093b331d43c795e39fb5f156ccb7dcbb04c5d745d5e840c2d6926c11292dbd4"}}, "download_size": 526642289, "post_processing_size": null, "dataset_size": 2778305173, "size_in_bytes": 3304947462}}
+{"ast": {"description": "Dataset consisting of parsed ASTs that were used to train and\nevaluate the DeepSyn tool.\nThe Python programs are collected from GitHub repositories\nby removing duplicate files, removing project forks (copy of another existing repository)\n,keeping only programs that parse and have at most 30'000 nodes in the AST and\nwe aim to remove obfuscated files", "citation": "@InProceedings{OOPSLA \u201916, ACM,\ntitle = {Probabilistic Model for Code with Decision Trees.},\nauthors={Raychev, V., Bielik, P., and Vechev, M.},\nyear={2016}\n}\n", "homepage": "https://www.sri.inf.ethz.ch/py150", "license": "", "features": {"ast": {"feature": {"type": {"dtype": "string", "id": null, "_type": "Value"}, "value": {"dtype": "string", "id": null, "_type": "Value"}, "children": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": {"input": "ast", "output": ""}, "builder_name": "py_ast", "config_name": "ast", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1870790180, "num_examples": 100000, "dataset_name": "py_ast"}, "test": {"name": "test", "num_bytes": 907514993, "num_examples": 50000, "dataset_name": "py_ast"}}, "download_checksums": {"http://files.srl.inf.ethz.ch/data/py150.tar.gz": {"num_bytes": 526642289, "checksum": "4093b331d43c795e39fb5f156ccb7dcbb04c5d745d5e840c2d6926c11292dbd4"}}, "download_size": 526642289, "post_processing_size": null, "dataset_size": 2778305173, "size_in_bytes": 3304947462}}
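Per the `features` schema above, each example's `ast` is a sequence of nodes, each carrying a string `type`, a string `value`, and a `children` sequence of `int32` indices into the same list (the flattened py150 node layout). A minimal sketch of traversing such a record; `count_subtree` and the tiny hand-written example are illustrative, not part of the loader:

```python
# Each node mirrors the {"type", "value", "children"} features from dataset_infos.json:
# children hold integer indices into the flat node list rather than nested objects.
def count_subtree(nodes: list[dict], root: int = 0) -> int:
    """Count the nodes reachable from `root` in a flattened, index-linked AST."""
    total = 1
    for child in nodes[root].get("children", []):
        total += count_subtree(nodes, child)
    return total

# Tiny hand-written record in the py150 node layout (not taken from the dataset):
example = [
    {"type": "Module", "children": [1]},
    {"type": "Expr", "children": [2]},
    {"type": "Str", "value": "hello"},
]
print(count_subtree(example))          # 3: the whole tree
print(count_subtree(example, root=1))  # 2: the Expr subtree
```

The index-linked layout is what lets the loader store an entire AST as three parallel arrow columns instead of a recursive structure.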
py_ast.py CHANGED
@@ -33,7 +33,7 @@ year={2016}
 # TODO: Add description of the dataset here
 # You can copy an official description
 _DESCRIPTION = """\
-dataset consisting of parsed Parsed ASTs that were used to train and
+Dataset consisting of parsed ASTs that were used to train and
 evaluate the DeepSyn tool.
 The Python programs are collected from GitHub repositories
 by removing duplicate files, removing project forks (copy of another existing repository)