---
license: cc-by-4.0
---
# ComPile: A Large IR Dataset from Production Sources
## About
Utilizing the LLVM compiler infrastructure shared by a number of languages, ComPile is a large dataset of LLVM IR. The dataset is generated from programming languages built on the shared LLVM infrastructure, including Rust, Swift, Julia, and C/C++, by hooking into LLVM code generation, either through the language's package manager or through the compiler directly. Our dataset collection utility for the LLVM compilation infrastructure uses these hooks to extract intermediate representations from production-grade programs.
For an in-depth look at the statistical properties of the dataset, please have a look at our arXiv preprint.
## Usage
Using ComPile is relatively simple with HuggingFace's `datasets` library. To load the dataset, run the following in a Python interpreter or within a Python script:
```python
from datasets import load_dataset

ds = load_dataset('llvm-ml/ComPile', split='train')
```
While this will just work, the download will take quite a while, as `datasets` by default will download all 550GB+ within the dataset and cache it locally. Note that the data will be placed in the directory specified by the environment variable `HF_DATASETS_CACHE`, which defaults to `~/.cache/huggingface`.
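If the default cache location sits on a small disk, you can redirect it before importing `datasets`. A minimal sketch, assuming a hypothetical `/data/hf_cache` directory with enough free space:

```python
import os

# Hypothetical cache directory; this must be set before importing
# `datasets`, which reads HF_DATASETS_CACHE at import time.
os.environ['HF_DATASETS_CACHE'] = '/data/hf_cache'

from datasets import load_dataset

ds = load_dataset('llvm-ml/ComPile', split='train')
```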
You can also load the dataset in a streaming format, where no data is saved locally:
```python
ds = load_dataset('llvm-ml/ComPile', split='train', streaming=True)
```
This makes experimentation much easier, as no large upfront time investment is required, but iteration is significantly slower than loading the dataset from local disk. For experimentation that requires more performance but might not require the whole dataset, you can also specify a portion of the dataset to download. For example, the following code will only download the first 10% of the dataset:
```python
ds = load_dataset('llvm-ml/ComPile', split='train[:10%]')
```
Once the dataset has been loaded, the individual module files can be accessed by iterating through the dataset or accessing specific indices:
```python
# We can iterate through the dataset
next(iter(ds))

# We can also access modules at specific indices
ds[0]
```
Filtering and map operations can also be applied efficiently using primitives available within the HuggingFace `datasets` library. More documentation is available here.
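As a minimal sketch, the following keeps only Rust modules and records each module's bitcode size. The lowercase `'rust'` label is an assumption; check the `language` column described below for the exact values:

```python
from datasets import load_dataset

ds = load_dataset('llvm-ml/ComPile', split='train', streaming=True)

# Keep only modules compiled from Rust. The exact label in the
# `language` column is assumed here; inspect a row to confirm it.
rust_ds = ds.filter(lambda row: row['language'] == 'rust')

# Record the size in bytes of each module's raw bitcode.
rust_ds = rust_ds.map(lambda row: {'bitcode_size': len(row['content'])})

print(next(iter(rust_ds))['bitcode_size'])
```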
## Dataset Format
Each row in the dataset consists of an individual LLVM-IR Module along with some metadata. There are six columns associated with each row:
- `content` - This column contains the raw bitcode that composes the module. This can be written to a `.bc` file and manipulated using the standard LLVM utilities, or passed in directly through stdin if using something like Python's `subprocess` (see the sketch after this list).
- `license_expression` - This column contains the SPDX expression describing the license of the project that the module came from.
- `license_source` - This column describes the way the `license_expression` was determined. This might indicate an individual package ecosystem (e.g. `spack`), license detection (e.g. `go_license_detector`), or might also indicate manual curation (`manual`).
- `license_files` - This column contains an array of license files. These file names map to licenses included in `/licenses/licenses-0.parquet`.
- `package_source` - This column contains information on the package that the module was sourced from. This is typically a link to a tar archive or git repository from which the project was built, but might also contain a mapping to a specific package ecosystem that provides the source, such as Spack.
- `language` - This column indicates the source language that the module was compiled from.
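To illustrate working with the `content` column, here is a minimal sketch that writes one module's bitcode to a `.bc` file and also pipes it through stdin to produce textual IR. It assumes the dataset has been loaded as `ds` and that `llvm-dis` is on your PATH:

```python
import subprocess

module = ds[0]

# Write the raw bitcode to a .bc file for use with the LLVM utilities.
with open('module.bc', 'wb') as f:
    f.write(module['content'])

# Or pass the bitcode directly through stdin; `llvm-dis -` reads
# bitcode from stdin and writes textual IR to stdout.
result = subprocess.run(
    ['llvm-dis', '-'],
    input=module['content'],
    capture_output=True,
    check=True,
)
print(result.stdout.decode('utf-8'))
```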
## Dataset Size
| Language | Raw Size | License Constraints | Deduplicated + License Constraints |
|---|---|---|---|
| C/C++ | 124GB | 47GB | 31GB |
| C | N/A | N/A | 3GB |
| C++ | N/A | N/A | 28GB |
| Julia | 201GB | 179GB | 153GB |
| Swift | 8GB | 7GB | 7GB |
| Rust | 656GB | 443GB | 373GB |
| Total | 989GB | 676GB | 564GB |
The raw size is the size obtained directly from building all the projects. The license constraints column shows the size per language after license information is taken into account. The last column shows the size when both license constraints and deduplication are taken into account, which is what is included in the dataset.
## Licensing
The individual modules within the dataset are subject to the licenses of the projects that they come from. License information is available in each row, including the SPDX license expression, the license files, and also a link to the package source where license information can be further validated.
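For example, a minimal sketch of filtering on the per-row license metadata; the allow-list of SPDX expressions here is hypothetical, and you should vet licenses against your own requirements:

```python
# Hypothetical allow-list of SPDX expressions. Note that exact-match
# on the expression string is a simplification: SPDX expressions can
# be compound (e.g. 'MIT OR Apache-2.0').
ALLOWED = {'MIT', 'Apache-2.0', 'BSD-3-Clause'}

permissive_ds = ds.filter(lambda row: row['license_expression'] in ALLOWED)
```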
The curation of these modules is licensed under a CC-BY-4.0 license.