|
--- |
|
annotations_creators: |
|
- no-annotation |
|
language_creators: |
|
- machine-generated |
|
language: |
|
- code |
|
license: |
|
- other |
|
multilinguality: |
|
- multilingual |
|
size_categories: |
|
- 100K<n<1M |
|
- 10K<n<100K |
|
- 1M<n<10M |
|
source_datasets: |
|
- original |
|
task_categories: |
|
- text-generation |
|
- fill-mask |
|
task_ids: |
|
- language-modeling |
|
- masked-language-modeling |
|
paperswithcode_id: codesearchnet |
|
pretty_name: CodeSearchNet |
|
dataset_info: |
|
- config_name: all |
|
features: |
|
- name: repository_name |
|
dtype: string |
|
- name: func_path_in_repository |
|
dtype: string |
|
- name: func_name |
|
dtype: string |
|
- name: whole_func_string |
|
dtype: string |
|
- name: language |
|
dtype: string |
|
- name: func_code_string |
|
dtype: string |
|
- name: func_documentation_string |
|
dtype: string |
|
- name: func_code_url |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 3722956913 |
|
num_examples: 1880853 |
|
- name: test |
|
num_bytes: 196789933 |
|
num_examples: 100529 |
|
- name: validation |
|
num_bytes: 176665333 |
|
num_examples: 89154 |
|
download_size: 1374970394 |
|
dataset_size: 4096412179 |
|
- config_name: go |
|
features: |
|
- name: repository_name |
|
dtype: string |
|
- name: func_path_in_repository |
|
dtype: string |
|
- name: func_name |
|
dtype: string |
|
- name: whole_func_string |
|
dtype: string |
|
- name: language |
|
dtype: string |
|
- name: func_code_string |
|
dtype: string |
|
- name: func_documentation_string |
|
dtype: string |
|
- name: func_code_url |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 409170909 |
|
num_examples: 317832 |
|
- name: test |
|
num_bytes: 17800759 |
|
num_examples: 14291 |
|
- name: validation |
|
num_bytes: 15005438 |
|
num_examples: 14242 |
|
download_size: 150594843 |
|
dataset_size: 441977106 |
|
- config_name: java |
|
features: |
|
- name: repository_name |
|
dtype: string |
|
- name: func_path_in_repository |
|
dtype: string |
|
- name: func_name |
|
dtype: string |
|
- name: whole_func_string |
|
dtype: string |
|
- name: language |
|
dtype: string |
|
- name: func_code_string |
|
dtype: string |
|
- name: func_documentation_string |
|
dtype: string |
|
- name: func_code_url |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 908426737 |
|
num_examples: 454451 |
|
- name: test |
|
num_bytes: 51425767 |
|
num_examples: 26909 |
|
- name: validation |
|
num_bytes: 27050061 |
|
num_examples: 15328 |
|
download_size: 292501337 |
|
dataset_size: 986902565 |
|
- config_name: javascript |
|
features: |
|
- name: repository_name |
|
dtype: string |
|
- name: func_path_in_repository |
|
dtype: string |
|
- name: func_name |
|
dtype: string |
|
- name: whole_func_string |
|
dtype: string |
|
- name: language |
|
dtype: string |
|
- name: func_code_string |
|
dtype: string |
|
- name: func_documentation_string |
|
dtype: string |
|
- name: func_code_url |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 290274945 |
|
num_examples: 123889 |
|
- name: test |
|
num_bytes: 14699408 |
|
num_examples: 6483 |
|
- name: validation |
|
num_bytes: 18327918 |
|
num_examples: 8253 |
|
download_size: 120536692 |
|
dataset_size: 323302271 |
|
- config_name: php |
|
features: |
|
- name: repository_name |
|
dtype: string |
|
- name: func_path_in_repository |
|
dtype: string |
|
- name: func_name |
|
dtype: string |
|
- name: whole_func_string |
|
dtype: string |
|
- name: language |
|
dtype: string |
|
- name: func_code_string |
|
dtype: string |
|
- name: func_documentation_string |
|
dtype: string |
|
- name: func_code_url |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 955464342 |
|
num_examples: 523712 |
|
- name: test |
|
num_bytes: 50005248 |
|
num_examples: 28391 |
|
- name: validation |
|
num_bytes: 48431131 |
|
num_examples: 26015 |
|
download_size: 346362115 |
|
dataset_size: 1053900721 |
|
- config_name: python |
|
features: |
|
- name: repository_name |
|
dtype: string |
|
- name: func_path_in_repository |
|
dtype: string |
|
- name: func_name |
|
dtype: string |
|
- name: whole_func_string |
|
dtype: string |
|
- name: language |
|
dtype: string |
|
- name: func_code_string |
|
dtype: string |
|
- name: func_documentation_string |
|
dtype: string |
|
- name: func_code_url |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 1086892447 |
|
num_examples: 412178 |
|
- name: test |
|
num_bytes: 59417109 |
|
num_examples: 22176 |
|
- name: validation |
|
num_bytes: 64756973 |
|
num_examples: 23107 |
|
download_size: 435192611 |
|
dataset_size: 1211066529 |
|
- config_name: ruby |
|
features: |
|
- name: repository_name |
|
dtype: string |
|
- name: func_path_in_repository |
|
dtype: string |
|
- name: func_name |
|
dtype: string |
|
- name: whole_func_string |
|
dtype: string |
|
- name: language |
|
dtype: string |
|
- name: func_code_string |
|
dtype: string |
|
- name: func_documentation_string |
|
dtype: string |
|
- name: func_code_url |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 72727533 |
|
num_examples: 48791 |
|
- name: test |
|
num_bytes: 3441642 |
|
num_examples: 2279 |
|
- name: validation |
|
num_bytes: 3093812 |
|
num_examples: 2209 |
|
download_size: 29488621 |
|
dataset_size: 79262987 |
|
configs: |
|
- config_name: all |
|
data_files: |
|
- split: train |
|
path: all/train-* |
|
- split: test |
|
path: all/test-* |
|
- split: validation |
|
path: all/validation-* |
|
- config_name: go |
|
data_files: |
|
- split: train |
|
path: go/train-* |
|
- split: test |
|
path: go/test-* |
|
- split: validation |
|
path: go/validation-* |
|
- config_name: java |
|
data_files: |
|
- split: train |
|
path: java/train-* |
|
- split: test |
|
path: java/test-* |
|
- split: validation |
|
path: java/validation-* |
|
- config_name: javascript |
|
data_files: |
|
- split: train |
|
path: javascript/train-* |
|
- split: test |
|
path: javascript/test-* |
|
- split: validation |
|
path: javascript/validation-* |
|
- config_name: php |
|
data_files: |
|
- split: train |
|
path: php/train-* |
|
- split: test |
|
path: php/test-* |
|
- split: validation |
|
path: php/validation-* |
|
- config_name: python |
|
data_files: |
|
- split: train |
|
path: python/train-* |
|
- split: test |
|
path: python/test-* |
|
- split: validation |
|
path: python/validation-* |
|
- config_name: ruby |
|
data_files: |
|
- split: train |
|
path: ruby/train-* |
|
- split: test |
|
path: ruby/test-* |
|
- split: validation |
|
path: ruby/validation-* |
|
config_names: |
|
- all |
|
- go |
|
- java |
|
- javascript |
|
- php |
|
- python |
|
- ruby |
|
--- |
|
|
|
# CodeSearchNet |
|
|
|
This is an *unofficial* reupload of the [code_search_net](https://huggingface.co/datasets/code_search_net) dataset in Parquet format. I have also removed the columns `func_code_tokens`, `func_documentation_tokens`, and `split_name`, as they are not relevant. The original repository relies on a Python module that is downloaded and executed to unpack the dataset, which is a potential security risk and, more practically, raises an annoying warning on every load. As a bonus, Parquet files load faster.
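For example, the Parquet files can be loaded directly with the `datasets` library. This is a minimal sketch: the repository id below is a placeholder, and the config names are taken from the YAML header of this card.

```python
# Minimal loading sketch for this Parquet reupload; assumes `pip install datasets`.
# REPO_ID is a placeholder: substitute this repository's actual id.
REPO_ID = "user/codesearchnet"

VALID_CONFIGS = ("all", "go", "java", "javascript", "php", "python", "ruby")

def load_codesearchnet(config="ruby", split=None):
    """Load one configuration; `split` may be 'train', 'test' or 'validation' (None loads all three)."""
    if config not in VALID_CONFIGS:
        raise ValueError(f"unknown config {config!r}; expected one of {VALID_CONFIGS}")
    from datasets import load_dataset  # lazy import so the helper is importable without `datasets`
    return load_dataset(REPO_ID, config, split=split)

# Usage (downloads roughly 30 MB of Parquet for the `ruby` config):
#   ds = load_codesearchnet("ruby", split="train")
#   print(ds[0]["func_name"])
```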
|
|
|
Original model card: |
|
|
|
--- |
|
|
|
# Dataset Card for CodeSearchNet corpus |
|
|
|
## Table of Contents |
|
- [Dataset Description](#dataset-description) |
|
- [Dataset Summary](#dataset-summary) |
|
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) |
|
- [Languages](#languages) |
|
- [Dataset Structure](#dataset-structure) |
|
- [Data Instances](#data-instances) |
|
- [Data Fields](#data-fields) |
|
- [Data Splits](#data-splits) |
|
- [Dataset Creation](#dataset-creation) |
|
- [Curation Rationale](#curation-rationale) |
|
- [Source Data](#source-data) |
|
- [Annotations](#annotations) |
|
- [Personal and Sensitive Information](#personal-and-sensitive-information) |
|
- [Considerations for Using the Data](#considerations-for-using-the-data) |
|
- [Social Impact of Dataset](#social-impact-of-dataset) |
|
- [Discussion of Biases](#discussion-of-biases) |
|
- [Other Known Limitations](#other-known-limitations) |
|
- [Additional Information](#additional-information) |
|
- [Dataset Curators](#dataset-curators) |
|
- [Licensing Information](#licensing-information) |
|
- [Citation Information](#citation-information) |
|
- [Contributions](#contributions) |
|
|
|
## Dataset Description |
|
- **Homepage:** https://wandb.ai/github/CodeSearchNet/benchmark |
|
- **Repository:** https://github.com/github/CodeSearchNet |
|
- **Paper:** https://arxiv.org/abs/1909.09436 |
|
- **Data:** https://doi.org/10.5281/zenodo.7908468 |
|
- **Leaderboard:** https://wandb.ai/github/CodeSearchNet/benchmark/leaderboard |
|
|
|
### Dataset Summary |
|
|
|
The CodeSearchNet corpus is a dataset of 2 million (comment, code) pairs from open-source libraries hosted on GitHub. It contains code and documentation for several programming languages.
|
|
|
CodeSearchNet corpus was gathered to support the [CodeSearchNet challenge](https://wandb.ai/github/CodeSearchNet/benchmark), to explore the problem of code retrieval using natural language. |
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
- `language-modeling`: The dataset can be used to train language models for programming languages.
|
|
|
### Languages |
|
|
|
- Go **programming** language |
|
- Java **programming** language |
|
- JavaScript **programming** language
|
- PHP **programming** language |
|
- Python **programming** language |
|
- Ruby **programming** language |
|
|
|
## Dataset Structure |
|
|
|
### Data Instances |
|
|
|
A data point consists of a function code along with its documentation. Each data point also contains meta data on the function, such as the repository it was extracted from. |
|
```python
|
{
    'repository_name': 'organisation/repository',
    'func_path_in_repository': 'src/path/to/file.py',
    'func_name': 'func',
    'whole_func_string': 'def func(args):\n"""Docstring"""\n [...]',
    'language': 'python',
    'func_code_string': '[...]',
    'func_documentation_string': 'Docstring',
    'func_code_url': 'https://github.com/<org>/<repo>/blob/<hash>/src/path/to/file.py#L111-L150'
}
|
``` |
|
### Data Fields |
|
|
|
- `repository_name`: name of the GitHub repository

- `func_path_in_repository`: path, within the repository, of the file that contains the function

- `func_name`: name of the function in the file

- `whole_func_string`: code plus documentation of the function

- `language`: programming language in which the function is written

- `func_code_string`: function code

- `func_documentation_string`: function documentation

- `func_code_url`: URL to the function code on GitHub
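Since `func_code_url` follows a fixed GitHub blob-URL shape, the repository, commit, file path, and line range can be recovered from it. A small sketch; the regular expression is an assumption based on the example URL shown under Data Instances:

```python
import re

# Assumed shape, based on the example in this card:
# https://github.com/<org>/<repo>/blob/<sha>/<path>#L<start>-L<end>
FUNC_CODE_URL_RE = re.compile(
    r"^https://github\.com/(?P<org>[^/]+)/(?P<repo>[^/]+)/blob/"
    r"(?P<sha>[^/]+)/(?P<path>[^#]+)#L(?P<start>\d+)-L(?P<end>\d+)$"
)

def parse_func_code_url(url: str) -> dict:
    """Split a `func_code_url` into org, repo, commit sha, file path, and line range."""
    match = FUNC_CODE_URL_RE.match(url)
    if match is None:
        raise ValueError(f"unexpected func_code_url format: {url!r}")
    parts = match.groupdict()
    parts["start"], parts["end"] = int(parts["start"]), int(parts["end"])
    return parts
```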
|
|
|
### Data Splits |
|
|
|
Three splits are available: |
|
- train |
|
- test |
|
- validation
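Using the example counts for the `all` configuration from this card's metadata header, the splits work out to roughly 91% / 5% / 4%. A quick check, with the counts copied from the header:

```python
# Split sizes of the `all` configuration, copied from this card's metadata header.
counts = {"train": 1_880_853, "test": 100_529, "validation": 89_154}
total = sum(counts.values())  # 2,070,536 examples overall

# Percentage share of each split, rounded to one decimal place.
shares = {name: round(100 * n / total, 1) for name, n in counts.items()}
print(shares)  # roughly 90.8 / 4.9 / 4.3 percent
```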
|
|
|
## Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
[More Information Needed] |
|
|
|
### Source Data |
|
|
|
#### Initial Data Collection and Normalization |
|
|
|
Full details are available in the [original technical report](https://arxiv.org/pdf/1909.09436.pdf).
|
|
|
**Corpus collection**: |
|
|
|
The corpus was collected from publicly available, open-source, non-fork GitHub repositories, using libraries.io to identify all projects that are used by at least one other project and to sort them by “popularity” as indicated by their number of stars and forks.
|
|
|
Then, any project that does not have a license, or whose license does not explicitly permit redistribution of parts of the project, was removed. Tree-sitter, GitHub's universal parser, was then used to tokenize all Go, Java, JavaScript, Python, PHP and Ruby functions (or methods) and, where available, their respective documentation text using a heuristic regular expression.
|
|
|
**Corpus filtering**: |
|
|
|
Functions without documentation are removed from the corpus. This yields a set of pairs ($c_i$, $d_i$) where $c_i$ is some function documented by $d_i$. The pairs ($c_i$, $d_i$) are then passed through the following preprocessing steps:
|
|
|
- Documentation $d_i$ is truncated to the first full paragraph to remove in-depth discussion of function arguments and return values |
|
- Pairs in which $d_i$ is shorter than three tokens are removed |
|
- Functions $c_i$ whose implementation is shorter than three lines are removed |
|
- Functions whose name contains the substring “test” are removed |
|
- Constructors and standard extension methods (e.g. `__str__` in Python or `toString` in Java) are removed
|
- Duplicate and near-duplicate functions are removed, so that only one version of each function is kept
|
|
|
#### Who are the source language producers? |
|
|
|
Open-source contributors produced the code and documentation.

The dataset was gathered and preprocessed automatically.
|
|
|
### Annotations |
|
|
|
#### Annotation process |
|
|
|
[More Information Needed] |
|
|
|
#### Who are the annotators? |
|
|
|
[More Information Needed] |
|
|
|
### Personal and Sensitive Information |
|
|
|
[More Information Needed] |
|
|
|
## Considerations for Using the Data |
|
|
|
### Social Impact of Dataset |
|
|
|
[More Information Needed] |
|
|
|
### Discussion of Biases |
|
|
|
[More Information Needed] |
|
|
|
### Other Known Limitations |
|
|
|
[More Information Needed] |
|
|
|
## Additional Information |
|
|
|
### Dataset Curators |
|
|
|
[More Information Needed] |
|
|
|
### Licensing Information |
|
|
|
Each example in the dataset is extracted from a GitHub repository, and each repository has its own license. Example-wise license information is not (yet) included in this dataset: you will need to find out yourself which license the code is using.
|
|
|
### Citation Information |
|
|
|
```bibtex
@article{husain2019codesearchnet,
  title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},
  author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
  journal={arXiv preprint arXiv:1909.09436},
  year={2019}
}
```
|
|
|
### Contributions |
|
|
|
Thanks to [@SBrandeis](https://github.com/SBrandeis) for adding this dataset. |