---
annotations_creators: []
language:
  - code
license: cc-by-4.0
multilinguality:
  - multilingual
pretty_name: ComPile
size_categories:
  - unknown
source_datasets: []
task_categories:
  - text-generation
task_ids: []
---

# Dataset Card for ComPile: A Large IR Dataset from Production Sources

## Table of Contents

- [Dataset Description](#dataset-description)
- [Dataset Usage](#dataset-usage)
- [Dataset Structure](#dataset-structure)
- [Dataset Construction](#dataset-construction)
- [Licensing](#licensing)
- [Contact Info](#contact-info)

## Dataset Description

### Changelog

| Release | Programming Languages     | Description                                                 |
|---------|---------------------------|-------------------------------------------------------------|
| v1.0    | C/C++, Rust, Swift, Julia | Fine-tuning-scale dataset of 564GB of deduplicated LLVM IR |

### Dataset Summary

ComPile contains over 500GB of permissively-licensed source code compiled to LLVM intermediate representation (IR), covering C/C++, Rust, Swift, and Julia. The dataset was created by hooking into LLVM code generation, either through each language's package manager or through the compiler directly, using our dataset collection utility for the LLVM compilation infrastructure to extract intermediate representations from production-grade programs.

### Languages

The dataset contains five programming languages as of v1.0:

`"c++"`, `"c"`, `"rust"`, `"swift"`, `"julia"`

## Dataset Usage

To use ComPile, we recommend Hugging Face's `datasets` library. For example, to load the dataset:

```python
from datasets import load_dataset

ds = load_dataset('llvm-ml/ComPile', split='train')
```

By default this will download the entirety of the 550GB+ dataset and cache it locally in the directory specified by the `HF_DATASETS_CACHE` environment variable, which defaults to `~/.cache/huggingface`. To load the dataset in a streaming format, where the data is not saved locally:

```python
ds = load_dataset('llvm-ml/ComPile', split='train', streaming=True)
```

For further arguments of `load_dataset`, please take a look at the loading a dataset documentation and the streaming documentation. Bear in mind that streaming is significantly slower than loading the dataset from local storage. For experimentation that requires more performance but might not need the whole dataset, you can also specify a portion of the dataset to download. For example, the following code will only download the first 10% of the dataset:

```python
ds = load_dataset('llvm-ml/ComPile', split='train[:10%]')
```

Once the dataset has been loaded, the individual module files can be accessed by iterating through the dataset or accessing specific indices:

```python
# We can iterate through the dataset
next(iter(ds))
# We can also access modules at specific indices
ds[0]
```

Filtering and map operations can be performed with the primitives available within the Hugging Face `datasets` library.
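As a minimal sketch of both primitives, assuming the column names described in the Data Fields section below (the specific filter and map here are illustrative, not part of the dataset's tooling):

```python
# Keep only modules that were compiled from Rust source.
rust_modules = ds.filter(lambda row: row['language'] == 'rust')

# Attach each module's bitcode size as a new column.
with_sizes = ds.map(lambda row: {'size_bytes': len(row['content'])})
```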

## Dataset Structure

### Data Fields

Each row in the dataset consists of an individual LLVM-IR module along with some metadata. There are six columns associated with each row:

- `content` (string): The raw bitcode that composes the module. This can be written to a `.bc` file and manipulated using the standard LLVM utilities, or passed in directly through stdin if using something like Python's `subprocess` (see the sketch after this list).
- `license_expression` (string): The SPDX expression describing the license of the project that the module came from.
- `license_source` (string): How the `license_expression` was determined. This might indicate an individual package ecosystem (e.g., spack), license detection (e.g., go_license_detector), or manual curation (`manual`).
- `license_files`: An array of license file names. These file names map to licenses included in `/licenses/licenses-0.parquet`.
- `package_source` (string): Information on the package that the module was sourced from. This is typically a link to a tar archive or git repository from which the project was built, but might also be a mapping to a specific package ecosystem that provides the source, such as Spack.
- `language` (string): The source language that the module was compiled from.
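As an illustration of working with the `content` field, here is a minimal sketch that disassembles one module to textual IR. It assumes `llvm-dis` is installed and on your `PATH`, and that `content` arrives as raw bytes:

```python
import subprocess

from datasets import load_dataset

# Stream so we don't download the full dataset just to inspect one module.
ds = load_dataset('llvm-ml/ComPile', split='train', streaming=True)
module = next(iter(ds))

# Option 1: write the bitcode to a .bc file for use with the LLVM utilities.
with open('module.bc', 'wb') as f:
    f.write(module['content'])

# Option 2: pipe the bitcode through stdin to llvm-dis, writing the
# textual IR to stdout ('-o -').
result = subprocess.run(
    ['llvm-dis', '-', '-o', '-'],
    input=module['content'],
    capture_output=True,
    check=True,
)
textual_ir = result.stdout.decode('utf-8')
print(textual_ir[:500])
```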

### Dataset Size

| Language | Raw Size | License Constraints | Deduplicated + License Constraints |
|----------|----------|---------------------|------------------------------------|
| C/C++    | 124GB    | 47GB                | 31GB                               |
| C        | N/A      | N/A                 | 3GB                                |
| C++      | N/A      | N/A                 | 28GB                               |
| Julia    | 201GB    | 179GB               | 153GB                              |
| Swift    | 8GB      | 7GB                 | 7GB                                |
| Rust     | 656GB    | 443GB               | 373GB                              |
| Total    | 989GB    | 676GB               | 564GB                              |

The raw size is the size obtained directly from building all the projects. The license constraints column shows the size per language after license information is taken into account. The last column shows the size when both license constraints and deduplication are taken into account, which is what is included in the dataset.

Note that the sizes displayed here are of the compressed bitcode representation rather than textual IR. We see an expansion ratio of 2-5x, averaging around 4x, when converting from compressed bitcode to textual IR.
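You can check this ratio on a single module yourself, reusing the `module` and `textual_ir` values from the `llvm-dis` sketch above (the exact ratio varies per module):

```python
# Textual IR size relative to the stored bitcode size for one module.
ratio = len(textual_ir.encode('utf-8')) / len(module['content'])
print(f'expansion ratio: {ratio:.1f}x')
```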

## Dataset Construction

Exact details on how the dataset is constructed are available in our paper describing the dataset. The packages for v1.0 of the dataset were downloaded and built on January 12-13, 2024.

## Licensing

The individual modules within the dataset are subject to the licenses of the projects that they come from. License information is available in each row, including the SPDX license expression, the license files, and a link to the package source where license information can be further validated.
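For example, a sketch of inspecting the licensing metadata of a single module, or restricting the dataset to one SPDX expression (MIT is just an illustrative choice here, not a statement about the dataset's contents):

```python
# Inspect the licensing metadata attached to one module.
row = next(iter(ds))
print(row['license_expression'], row['license_source'], row['package_source'])

# Keep only modules from projects licensed as MIT.
mit_only = ds.filter(lambda r: r['license_expression'] == 'MIT')
```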

The curation of these modules is licensed under a CC-BY-4.0 license.

## Contact Info

  1. Aiden Grossman (amgrossman@ucdavis.edu)
  2. Johannes Doerfert (doerfert1@llnl.gov)