Commit f7d680d (1 parent: dff1b04) by ludgerpaehler

Big update of the README (#1)

Files changed (1): README.md (+53 -25)

---
annotations_creators: []
language:
- code
license: cc-by-4.0
multilinguality:
- multilingual
pretty_name: ComPile
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids: []
---

# Dataset Card for ComPile: A Large IR Dataset from Production Sources

## Dataset Description

- **Homepage:** https://llvm-ml.github.io/ComPile/
- **Paper:** https://arxiv.org/abs/2309.15432
- **Leaderboard:** N/A

### Changelog

|Release|Programming Languages|Description|
|-|-|-|
|v1.0| C/C++, Rust, Swift, Julia | Fine-tuning-scale dataset of 564GB of deduplicated LLVM IR |

### Dataset Summary

ComPile contains over 500GB of permissively licensed source code compiled to [LLVM](https://llvm.org) intermediate representation (IR), covering C/C++, Rust, Swift, and Julia.
The dataset was created by hooking into LLVM code generation, either through the language's package manager or the
compiler directly, to extract the dataset of intermediate representations from production-grade programs using our
[dataset collection utility for the LLVM compilation infrastructure](https://doi.org/10.5281/zenodo.10155761).

### Languages

The dataset contains **5 programming languages** as of v1.0:
```
"c++", "c", "rust", "swift", "julia"
```

### Dataset Usage

To use ComPile we recommend HuggingFace's [datasets library](https://huggingface.co/docs/datasets/index). For example, to load the dataset:

```python
from datasets import load_dataset

ds = load_dataset('llvm-ml/ComPile', split='train')
```
 
By default this will download the entire 550GB+ dataset and cache it locally in the directory
specified by the environment variable `HF_DATASETS_CACHE`, which defaults to `~/.cache/huggingface`.
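
If the default cache location is too small, the cache can be redirected; a minimal sketch, with `/data/hf_cache` as a purely illustrative path:

```python
import os

# Point the datasets cache at a larger volume *before* importing `datasets`
# (the path below is only an illustrative placeholder).
os.environ["HF_DATASETS_CACHE"] = "/data/hf_cache"

from datasets import load_dataset

# Alternatively, pass the location explicitly for a single dataset.
ds = load_dataset('llvm-ml/ComPile', split='train', cache_dir='/data/hf_cache')
```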

To load the dataset in a streaming format, where the data is not saved locally:

```python
ds = load_dataset('llvm-ml/ComPile', split='train', streaming=True)
```

For further arguments of `load_dataset`, please take a look at the
[documentation on loading a dataset](https://huggingface.co/docs/datasets/load_hub) and
the [streaming documentation](https://huggingface.co/docs/datasets/stream). Bear in mind that
streaming is significantly slower than loading the dataset from local storage. For experimentation that
requires more performance but might not require the whole dataset, you can also specify a portion
of the dataset to download. For example, the following code will only download the first 10%
of the dataset:
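
A minimal sketch of this using the `datasets` split-slicing syntax:

```python
# Load (and cache) only the first 10% of the train split.
ds = load_dataset('llvm-ml/ComPile', split='train[:10%]')
```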
 
Individual modules can then be accessed by indexing a locally loaded dataset (for a streaming dataset, `next(iter(ds))` yields the first element):

```python
ds[0]
```
 
Filtering and map operations can be performed with the primitives available within the
HuggingFace `datasets` library, as sketched below.
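
For instance, a minimal sketch that filters for Rust modules and maps a new column over them (column names as documented under Data Fields below):

```python
# Keep only modules that were compiled from Rust sources ...
rust_ds = ds.filter(lambda row: row['language'] == 'rust')

# ... and add a derived column via a map operation.
rust_ds = rust_ds.map(lambda row: {'license_upper': row['license_expression'].upper()})
```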

## Dataset Structure

### Data Fields

Each row in the dataset consists of an individual LLVM-IR module along with some metadata. There are
six columns associated with each row:
 
- `content` (string): This column contains the raw bitcode that composes the module. This can be written to a `.bc`
  file and manipulated using the standard LLVM utilities, or passed in directly through stdin if using something
  like Python's `subprocess` (see the sketch after this list).
- `license_expression` (string): This column contains the SPDX expression describing the license of the project that the
  module came from.
- `license_source` (string): This column describes the way the `license_expression` was determined. This might indicate
  an individual package ecosystem (e.g. `spack`), license detection (e.g. `go_license_detector`), or might also indicate
  manual curation (`manual`).
- `license_files`: This column contains an array of license files. These file names map to licenses included in
  `/licenses/licenses-0.parquet`.
- `package_source` (string): This column contains information on the package that the module was sourced from. This is
  typically a link to a tar archive or git repository from which the project was built, but might also contain a
  mapping to a specific package ecosystem that provides the source, such as Spack.
- `language` (string): This column indicates the source language that the module was compiled from.
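
Following the `content` description above, a minimal sketch of round-tripping a module's bitcode through an LLVM utility; it assumes `content` is returned as raw bytes and that `llvm-dis` is on the `PATH`:

```python
import subprocess

from datasets import load_dataset

ds = load_dataset('llvm-ml/ComPile', split='train', streaming=True)
module = next(iter(ds))

# Write the raw bitcode out as a .bc file ...
with open('module.bc', 'wb') as f:
    f.write(module['content'])

# ... or pipe it straight into an LLVM tool via stdin and read back textual IR.
result = subprocess.run(
    ['llvm-dis', '-'],
    input=module['content'],
    capture_output=True,
    check=True,
)
print(result.stdout.decode()[:300])
```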

## Dataset Size