# Stack v2 Clean — 200K Multi-Language Code Subset
A cleaned and filtered subset of [bigcode/the-stack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2), prepared for continued pretraining and masked language modeling (MLM) on source code. The dataset contains 200,005 source files evenly distributed across five widely used programming languages.
## Dataset Summary
This dataset was assembled to support fine-tuning of encoder models such as ModernBERT on programming language data. It provides a balanced, deduplicated, and filtered collection of real-world source files drawn from public repositories indexed by Software Heritage and curated by the BigCode project.

| Language   | Files       | Approx. Size |
|------------|-------------|--------------|
| Python     | 40,001      | 57.8 MB      |
| JavaScript | 40,001      | 43.2 MB      |
| Java       | 40,001      | 41.6 MB      |
| C++        | 40,001      | 84.3 MB      |
| Go         | 40,001      | 60.8 MB      |
| **Total**  | **200,005** | **~288 MB**  |
## Data Fields
Each row contains the following fields:
- `blob_id`: Software Heritage blob identifier for the source file.
- `content_id`: Content hash used for deduplication.
- `repo_name`: Origin repository in `owner/name` format.
- `path`: File path within the source repository.
- `language`: Programming language label (Python, JavaScript, Java, C++, or Go).
- `extension`: File extension.
- `length_bytes`: Size of the original file in bytes.
- `license_type`: License classification reported by the source dataset.
- `content`: Full source code as a UTF-8 string.
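
For example, the dataset can be loaded with the `datasets` library and a row inspected directly. A minimal sketch (the repository ID is a placeholder for wherever this dataset is hosted):

```python
from datasets import load_dataset

# Placeholder repository ID; substitute the actual dataset path.
ds = load_dataset("user/stack-v2-clean-200k", split="train")

row = ds[0]
print(row["repo_name"], row["path"], row["language"], row["length_bytes"])
print(row["content"][:200])  # first 200 characters of the source file
```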
## Data Collection and Cleaning
Files were sampled from the streaming version of `bigcode/the-stack-v2` with a fixed random seed and passed through a two-stage pipeline:

1. **Metadata filters** removed vendored or auto-generated files, files outside a 200 B to 200 KB size range, and files whose extension did not match their reported language.
2. **Content filters**, applied after fetching each file from the public Software Heritage S3 bucket, checked encoding validity, line count and length distribution, alphanumeric and alphabetic character ratios, URL density, and comment density.

Exact deduplication was performed on `content_id` during streaming. Comment-heavy files were preserved up to a 95 percent comment ratio, since natural-language comments are valuable for MLM training. Roughly 30 to 50 percent of fetched files were rejected by the content filters, which is consistent with the StarCoder and BigCode preprocessing literature.
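
The exact implementation is not reproduced here, but the content-stage checks are straightforward to approximate. The sketch below uses assumed threshold values (the real pipeline's thresholds may differ) and a naive, language-agnostic comment heuristic:

```python
import re

# Assumed thresholds for illustration; the actual pipeline's values may differ.
MIN_BYTES, MAX_BYTES = 200, 200_000
MAX_LINE_LEN = 1_000        # very long lines suggest minified or generated code
MIN_ALNUM_RATIO = 0.25      # guards against binary-like or symbol-heavy blobs
MAX_URLS_PER_LINE = 0.5     # link-dense files are usually generated docs
MAX_COMMENT_RATIO = 0.95    # comment-heavy files are kept up to this point

def passes_content_filters(content: str) -> bool:
    data = content.encode("utf-8")
    if not MIN_BYTES <= len(data) <= MAX_BYTES:
        return False
    lines = content.splitlines()
    if not lines or max(len(line) for line in lines) > MAX_LINE_LEN:
        return False
    if sum(c.isalnum() for c in content) / len(content) < MIN_ALNUM_RATIO:
        return False
    if len(re.findall(r"https?://", content)) / len(lines) > MAX_URLS_PER_LINE:
        return False
    # Naive comment-density estimate; the real check is language-aware.
    comments = sum(line.lstrip().startswith(("#", "//")) for line in lines)
    return comments / len(lines) <= MAX_COMMENT_RATIO

# Exact deduplication during streaming: skip rows whose content_id was seen.
seen_ids: set[str] = set()

def is_new(row: dict) -> bool:
    if row["content_id"] in seen_ids:
        return False
    seen_ids.add(row["content_id"])
    return True
```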
## Intended Uses
The dataset is intended for research and educational use, particularly continued pretraining and masked language modeling on encoder architectures such as ModernBERT and CodeBERT. It is suitable for tokenizer training, representation learning experiments, and small- to medium-scale code understanding tasks.
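
As a sketch of that workflow, the snippet below tokenizes the `content` field for masked language modeling. It assumes the `answerdotai/ModernBERT-base` checkpoint and the `ds` object from the loading example above; the sequence length and masking probability are illustrative:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tok = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm_probability=0.15)

def tokenize(batch):
    # 1024 tokens is an arbitrary cap; adjust to your model's context window.
    return tok(batch["content"], truncation=True, max_length=1024)

tokenized = ds.map(tokenize, batched=True, remove_columns=ds.column_names)
# `tokenized` and `collator` can then be passed to a standard Trainer for MLM.
```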
## Limitations and Considerations
This dataset is a relatively small sample of the full Stack v2 corpus and is not intended for training large code generation models from scratch. Files retain their original licenses as classified upstream, and users are responsible for verifying license compatibility with their downstream use cases. No additional personally identifiable information (PII) removal has been applied beyond the upstream Stack v2 processing; users redistributing derivative artifacts should consider running a PII scrubbing pass such as `bigcode-pii` before publication. Near-duplicate detection (for example, MinHash-based) was not applied and may be beneficial for some training scenarios.
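
If near-duplicate filtering matters for your training scenario, a MinHash pass can be layered on top. A minimal sketch using the `datasketch` library; the number of permutations and the Jaccard threshold are illustrative:

```python
from datasketch import MinHash, MinHashLSH

lsh = MinHashLSH(threshold=0.8, num_perm=128)

def minhash_of(content: str) -> MinHash:
    m = MinHash(num_perm=128)
    for token in set(content.split()):  # crude whitespace shingling
        m.update(token.encode("utf-8"))
    return m

def is_near_duplicate(key: str, content: str) -> bool:
    m = minhash_of(content)
    if lsh.query(m):  # any previously indexed file above the threshold?
        return True
    lsh.insert(key, m)
    return False
```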
## Source and License
The underlying data originates from `bigcode/the-stack-v2`, which is governed by the BigCode project's terms of use and the original repository licenses of each source file. Users of this derivative dataset must comply with the upstream Stack v2 terms, available on the [original dataset page](https://huggingface.co/datasets/bigcode/the-stack-v2). The `license_type` field is preserved from the upstream dataset to support license-aware filtering.
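
For example, license-aware filtering reduces to a single `filter` call (assuming the upstream labels include a value such as `permissive`, and reusing `ds` from the loading example):

```python
# Keep only files whose upstream classification is permissive.
permissive = ds.filter(lambda row: row["license_type"] == "permissive")
```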
## Citation
If you use this dataset, please cite the original Stack v2 release:
```bibtex
@article{lozhkov2024starcoder,
  title={StarCoder 2 and The Stack v2: The Next Generation},
  author={Lozhkov, Anton and others},
  journal={arXiv preprint arXiv:2402.19173},
  year={2024}
}
```