---
license: afl-3.0
task_categories:
  - text-generation
  - feature-extraction
tags:
  - Program Synthesis
  - code
pretty_name: Functional Code
size_categories:
  - 100K<n<1M
dataset_info:
  features:
    - name: _id
      dtype: string
    - name: repository
      dtype: string
    - name: name
      dtype: string
    - name: content
      dtype: string
    - name: license
      dtype: 'null'
    - name: download_url
      dtype: string
    - name: language
      dtype: string
    - name: comments
      dtype: string
    - name: code
      dtype: string
  splits:
    - name: train
      num_bytes: 7561888852
      num_examples: 611738
    - name: test
      num_bytes: 1876266819
      num_examples: 152935
  download_size: 3643404015
  dataset_size: 9438155671
---

# Dataset Card for Functional Code

## Dataset Description

A collection of code in functional programming languages, collected from public repositories on GitHub.

- **Point of Contact:** dhuck

### Dataset Summary

This dataset is a collection of code examples in functional programming languages, intended for code generation tasks. It was collected over a week-long period in March 2023 as part of a project on program synthesis.

## Dataset Structure

### Data Instances

```
{
  'id': str,
  'repository': str,
  'filename': str,
  'license': str or None,
  'language': str,
  'content': str
}
```

### Data Fields

- `id`: SHA-256 hash of the `content` field. This ID scheme ensures that duplicate code examples introduced via forks or other copies are removed from the dataset.
- `repository`: The repository the file was pulled from. This can be used for attribution or to check for updated licensing of the code example.
- `filename`: Filename of the code example within the repository.
- `license`: Licensing information of the repository. This can be empty, and further work is likely necessary to parse licensing information from individual files.
- `language`: Programming language of the file, e.g. Haskell, Clojure, Lisp, etc.
- `content`: Full source code of the file, with some cleaning as described in the Curation section below. While many examples are short, others can be extremely long, so this field will likely require preprocessing for end tasks.
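The deduplication behind the `id` field can be sketched as follows. This is a hypothetical reconstruction (the `content_id` helper is not part of the dataset tooling), but it shows why a SHA-256 id collapses forked copies of a file into one record:

```python
import hashlib

def content_id(content: str) -> str:
    # Hypothetical reconstruction of the id scheme: the SHA-256 hex
    # digest of the file's content.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

# Two byte-identical files (e.g. a file and its copy in a fork) map to
# the same id, so keeping one record per id drops the duplicate.
original = "(define (square x) (* x x))"
fork_copy = "(define (square x) (* x x))"

records = {content_id(c): c for c in [original, fork_copy]}
print(len(records))  # 1
```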

### Data Splits

There are 157,218 test examples and 628,869 training examples. The split was created using scikit-learn's `train_test_split` function. More information will be provided at a later date.
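A split like the one described above can be sketched with `train_test_split`; the toy records, the 80/20 ratio, and the random seed below are illustrative assumptions, not the exact parameters used for this dataset:

```python
from sklearn.model_selection import train_test_split

# Toy stand-ins for dataset records.
examples = [{"_id": str(i), "content": f"(defn f{i} [x] x)"} for i in range(10)]

# train_test_split shuffles and partitions the records; test_size and
# random_state here are assumptions for illustration.
train, test = train_test_split(examples, test_size=0.2, random_state=42)
print(len(train), len(test))  # 8 2
```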

## Dataset Creation

### Curation Rationale

This dataset was put together for program synthesis tasks. The majority of available code datasets consist of imperative programming languages, while the program synthesis community has a rich history of methods using functional languages. This dataset aims to unify the two approaches by making a large training corpus of functional languages available to researchers.

### Source Data

#### Initial Data Collection and Normalization

Code examples were collected in a manner similar to other existing programming language datasets. Each example was pulled from public repositories on GitHub over a week in March 2023, by searching for common file extensions of the target languages (Clojure, Elixir, Haskell, Lisp, OCaml, Racket, and Scheme). The full source is included for each code example, so padding or truncation will be necessary for any training tasks. Significant effort was made to remove personal information from each code example. Email addresses and websites were removed using simple regex pattern matching, and spaCy NER was used to identify proper names in the comments only. Any token that spanned a name was replaced with the token PERSON, while email addresses and websites were dropped from each comment. Organizations and other information were left intact.
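A minimal sketch of the comment-scrubbing step described above; the regexes are illustrative assumptions (the exact patterns used during curation are not published), and the spaCy NER pass is indicated only in a comment because it requires a loaded model:

```python
import re

# Illustrative patterns; the actual regexes used in curation are unknown.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
URL_RE = re.compile(r"https?://\S+|www\.\S+")

def scrub_comment(text: str) -> str:
    # Per the curation description, emails and websites are dropped entirely.
    text = EMAIL_RE.sub("", text)
    text = URL_RE.sub("", text)
    # The pipeline additionally ran spaCy NER over comments and replaced any
    # token spanning a PERSON entity with the literal token "PERSON"; that
    # step is omitted here since it needs a spaCy model loaded.
    return text

print(scrub_comment(";; maintainer: jane@example.com, docs at https://example.org/docs"))
```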

#### Who are the source language producers?

Each example contains the repository the code originated from, identifying the source of each example.

### Personal and Sensitive Information

While great care was taken to remove proper names, email addresses, and websites, there may be examples where pattern matching did not work. While I used the best spaCy models available, I have witnessed false negatives on other tasks and datasets. To ensure no personal information makes it into training data, it is advisable to remove all comments if the training task does not require them. I made several PRs to the comment_parser Python library to support the languages in this dataset; my version of the parsing library can be found at https://github.com/d-huck/comment_parser.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

While code itself may not contain bias, programmers can use offensive, racist, homophobic, transphobic, misogynistic, or otherwise harmful words for variable names, and comments in the code examples may likewise contain hateful speech. Further updates to this dataset will investigate and address these issues. Models trained on this dataset may need additional toxicity training to remove these tendencies from their output.

### Other Known Limitations

The code in this dataset has not been checked for quality in any way. It is possible and probable that some of the examples are of poor quality and do not actually compile or run in their target language. Furthermore, some examples may not be in the language they claim to be, since GitHub search matches only on file extension and not on the actual contents of a file.