---
dataset_info:
  features:
  - name: identifier
    dtype: string
  - name: parameters
    dtype: string
  - name: docstring
    dtype: string
  - name: docstring_summary
    dtype: string
  - name: function
    dtype: string
  - name: function_tokens
    sequence: string
  - name: start_point
    sequence: int64
  - name: end_point
    sequence: int64
  - name: language
    dtype: string
  - name: docstring_language
    dtype: string
  - name: docstring_language_predictions
    dtype: string
  - name: is_langid_reliable
    dtype: string
  splits:
  - name: python_gh
    num_bytes: 36300760423
    num_examples: 15000002
  - name: java_gh
    num_bytes: 21613057110
    num_examples: 15000014
  - name: go_gh
    num_bytes: 22559741937
    num_examples: 15000078
  - name: javascript_gh
    num_bytes: 3895688311
    num_examples: 2000040
  download_size: 166324499
  dataset_size: 84369247781
task_categories:
- translation
- summarization
- text2text-generation
language:
- en
tags:
- code
size_categories:
- 10M<n<100M
---
# Docstring to code data
## Dataset Summary
This dataset contains pairs of English text and code snippets in multiple programming languages: Python, Java, JavaScript, and Go. The data is curated via an automated filtering pipeline from source files within [The Stack](https://huggingface.co/datasets/bigcode/the-stack).
## Supported Tasks
This dataset can be used to fine-tune code-to-text and/or text-to-code models, in both information retrieval and conditional generation settings.
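As an illustration, here is a minimal sketch of turning the data into (text, code) pairs for a conditional-generation (text-to-code) setup, using the `docstring_summary` and `function` fields described under Dataset Structure below:
```python
from datasets import load_dataset

def to_pair(example):
    # Pair the natural-language summary with the corresponding function body.
    return {"source": example["docstring_summary"], "target": example["function"]}

train_pairs = load_dataset("blindsubmissions/GH_text2code", split="python_gh").map(to_pair)
```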
## Splits
```python
DATA_SPLITS = {"python_gh", "java_gh", "javascript_gh", "go_gh"}
```
## How to load the data for a given programming language
```python
from datasets import load_dataset

def get_dataset(prog_lang):
    # Load the split for the requested programming language, e.g. "python_gh".
    test_data = load_dataset("blindsubmissions/GH_text2code", split=prog_lang)
    return test_data
```
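For a quick look at a split without downloading it in full, streaming can be used; a minimal usage sketch:
```python
from datasets import load_dataset

# Stream the Python split instead of materializing ~36 GB locally.
stream = load_dataset("blindsubmissions/GH_text2code", split="python_gh", streaming=True)
first = next(iter(stream))
print(first["identifier"])
```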
## Dataset Structure
### Data Instances
Each data instance corresponds to a function or method occurring in files that compose The Stack, i.e., permissively licensed files collected from GitHub.
### Relevant Data Fields
- identifier (string): Function/method name.
- parameters (string): Function parameters.
- return_statement (string): Return statement if found during parsing.
- docstring (string): Complete docstring content.
- docstring_summary (string): Summary/processed docstring dropping args and return statements.
- function (string): Actual function/method content.
- argument_list (null): List of arguments.
- language (string): Programming language of the function.
- type (string): Return type if found during parsing.
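To illustrate the schema, a small sketch that prints a few of these fields for one example (using the `get_dataset` helper defined above):
```python
example = get_dataset("python_gh")[0]
for field in ("identifier", "parameters", "docstring_summary", "language"):
    print(f"{field}: {example[field]!r}")
print(example["function"][:300])  # beginning of the function body
```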
## Summary of data curation pipeline
- Filtering out repositories that appear in [CodeSearchNet](https://huggingface.co/datasets/code_search_net).
- Filtering the files that belong to the programming languages of interest.
- Pre-filtering the files that likely contain text in the natural languages of interest.
- AST parsing with [Tree-sitter](https://tree-sitter.github.io/tree-sitter/).
- Performing language identification on the docstrings in the resulting set of functions/methods and selecting those classified as English via majority voting (a minimal sketch follows this list).
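The classifiers used for this step are not listed in this card; below is a minimal sketch of English filtering via majority voting, assuming the off-the-shelf detectors langid and langdetect as stand-ins:
```python
import langid                  # pip install langid
from langdetect import detect  # pip install langdetect

def is_english_by_majority(docstring: str) -> bool:
    # Collect one prediction per detector (stand-ins for the actual pipeline's classifiers).
    predictions = [langid.classify(docstring)[0]]  # classify() returns (lang, score)
    try:
        predictions.append(detect(docstring))
    except Exception:  # langdetect raises on very short or ambiguous input
        pass
    # Keep the docstring only if most detectors label it English.
    return predictions.count("en") > len(predictions) / 2
```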
## Social Impact of the dataset
This dataset is released with the aim of increasing the training data available to the NLP-for-code research community by providing text/code paired data. We expect it to help enable more accurate information retrieval systems, text-to-code generation, and code-to-text summarization.
As a subset of The Stack, this dataset inherits the de-risking efforts carried out when that dataset was built. Nevertheless, risks remain, and the data could be misused, for instance, to aid in the creation of malicious code. We note, however, that this risk is shared by any openly available code dataset.
Moreover, we remark that the data may contain harmful or offensive language, which could be learned by models trained on it.
## Discussion of Biases
The data is collected from GitHub and thus reflects naturally occurring text on that platform. As a consequence, code in certain languages is more or less likely to be well documented, and the resulting data is not uniformly distributed across programming languages.
## Known limitations
The dataset can be expanded to further improve its coverage.
Moreover, we use text naturally occurring in comments and docstrings rather than text written by human annotators. As such, the resulting data varies widely in quality, depending on the documentation practices of different software-developer communities. However, we remark that the task our evaluation dataset defines is reflective of what searching a real codebase would look like.
Finally, we note that the data is somewhat imbalanced for the same reason, since code in certain languages is more or less likely to be well documented.
## Maintenance plan:
The data will be kept up to date by following The Stack releases: we rerun our pipeline for every new release and add non-overlapping new content to both the training and testing partitions of our data. This carries over opt-out updates and includes fresh repositories.
## Update plan:
- Cover all 6 programming languages from CodeSearchNet.
## Licensing Information
This dataset is a subset filtered and pre-processed from [The Stack](https://huggingface.co/datasets/bigcode/the-stack), a collection of source code from repositories with various licenses. Any use of all or part of the code gathered here must abide by the terms of the original licenses.