---
license: apache-2.0
size_categories:
- 10M<n<100M
task_categories:
- text-generation
dataset_info:
  features:
  - name: code
    dtype: string
  - name: docstring
    dtype: string
  - name: _id
    dtype: string
  splits:
  - name: train
    num_bytes: 18759198502
    num_examples: 23526586
  download_size: 8549378238
  dataset_size: 18759198502
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for Python-Text2Code
This dataset supports the EACL 2024 paper *Text-to-Code Generation with Modality-relative Pre-training*.
- Repository: https://github.com/huawei-noah/noah-research/tree/master/NLP/text2code_mrpt
- Point of Contact: Fenia Christopoulou, Gerasimos Lampouras
## Dataset Description
The data were crawled from existing, public GitHub repositories before May 2021 and are intended for additional model training for the task of Code Synthesis (i.e. Text-to-Code generation) in Python.
### Details
Files that met the following criteria were kept: (a) the file size is under 1MB; (b) the code is Python 3 compatible, as verified via Abstract Syntax Tree (AST) parsing; (c) there are fewer than 100 characters per line on average; and (d) every line has fewer than 1,000 characters. A sketch of such a file-level filter is shown below.
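The paper does not prescribe a specific implementation for this filter; a minimal sketch, assuming Python's built-in `ast` module is sufficient to check Python 3 compatibility (the constants and the `keep_file` helper are illustrative only):

```python
import ast

MAX_FILE_BYTES = 1_000_000   # (a) file size under 1MB
MAX_AVG_LINE_LEN = 100       # (c) average characters per line
MAX_LINE_LEN = 1_000         # (d) characters in any single line

def keep_file(path: str) -> bool:
    """Return True if a Python file passes all four filtering criteria."""
    with open(path, "rb") as f:
        raw = f.read()
    if len(raw) >= MAX_FILE_BYTES:                               # (a)
        return False
    try:
        source = raw.decode("utf-8")
        ast.parse(source)                                        # (b) Python 3 compatible
    except (UnicodeDecodeError, SyntaxError, ValueError):
        return False
    lines = source.splitlines() or [""]
    if sum(map(len, lines)) / len(lines) >= MAX_AVG_LINE_LEN:    # (c)
        return False
    if max(map(len, lines)) >= MAX_LINE_LEN:                     # (d)
        return False
    return True
```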
We applied AST parsing (via Tree-sitter) to the remaining Python files to extract valid functions and their corresponding docstrings.
Docstrings were used as "problem descriptions" and were separated from the code; functions without a docstring were discarded.
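The extraction itself was done with Tree-sitter; a rough, simplified equivalent using Python's built-in `ast` module (the `extract_pairs` helper is hypothetical, and unlike the released data it leaves the docstring inside the returned code):

```python
import ast

def extract_pairs(source: str):
    """Yield (docstring, code) pairs for every documented function in a file."""
    tree = ast.walk(ast.parse(source))
    for node in tree:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            docstring = ast.get_docstring(node)
            if not docstring:
                # Functions without a docstring are discarded.
                continue
            # Note: the released dataset separates the docstring from the code,
            # whereas this sketch keeps it inside the function body for simplicity.
            yield docstring, ast.get_source_segment(source, node)
```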
We replaced new lines, indentation and dedentation with `<NEW_LINE>`, `<INDENT>` and `<DEDENT>` tokens, respectively, to normalise whitespace, which effectively reduced the length of the sequences.
Finally, only instances with a maximum combined length of 1,024 tokens (docstring + code) were kept.
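One way to perform this normalisation is with Python's `tokenize` module, which already emits explicit NEWLINE/INDENT/DEDENT tokens; the sketch below illustrates the idea and is not the paper's exact implementation:

```python
import io
import tokenize

def normalise(code: str) -> str:
    """Replace structural whitespace with explicit tokens."""
    pieces = []
    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.type in (tokenize.NEWLINE, tokenize.NL):
            pieces.append("<NEW_LINE>")
        elif tok.type == tokenize.INDENT:
            pieces.append("<INDENT>")
        elif tok.type == tokenize.DEDENT:
            pieces.append("<DEDENT>")
        elif tok.type == tokenize.ENDMARKER:
            continue
        else:
            pieces.append(tok.string)
    return " ".join(pieces)

# "def add(a, b):\n    return a + b\n" ->
# "def add ( a , b ) : <NEW_LINE> <INDENT> return a + b <NEW_LINE> <DEDENT>"
print(normalise("def add(a, b):\n    return a + b\n"))
```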
The final dataset contains 23,526,586 text-to-code pairs in Python.
Check the paper for additional details!
## Data Fields
Each instance contains 3 fields:
- `_id`: Unique ID of each pair
- `code`: The Python code
- `docstring`: The docstring/problem description associated with this code
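The fields can be inspected after loading the dataset with the `datasets` library; the repository ID below is illustrative, so substitute the actual Hub ID of this dataset:

```python
from datasets import load_dataset

# NOTE: replace with the actual Hub repository ID of this dataset.
ds = load_dataset("huawei-noah/Python-Text2Code", split="train")

example = ds[0]
print(example["_id"])        # unique ID of the pair
print(example["docstring"])  # problem description
print(example["code"])       # normalised Python code
```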
## Data Splits
There is a single data split in the dataset; we randomly sampled 0.1% of it to serve as a validation set.
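A comparable held-out split can be reproduced with `train_test_split`; the 0.1% fraction comes from the description above, while the seed and repository ID are illustrative assumptions:

```python
from datasets import load_dataset

ds = load_dataset("huawei-noah/Python-Text2Code", split="train")  # illustrative repo ID

# Hold out 0.1% of the training data as a validation set.
split = ds.train_test_split(test_size=0.001, seed=42)
train_ds, valid_ds = split["train"], split["test"]
```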
## Citation

**BibTeX:**
@inproceedings{christopoulou-etal-2024-text,
    title = "Text-to-Code Generation with Modality-relative Pre-training",
    author = "Christopoulou, Fenia and
      Zhang, Guchun and
      Lampouras, Gerasimos",
    editor = "Graham, Yvette and
      Purver, Matthew",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-long.72",
    pages = "1194--1208"
}