ammarnasr committed 6b985ad (1 parent: 7c2db53)

Update README.md

Files changed (1): README.md (+37, -10)

README.md CHANGED
Previous version (parent 7c2db53):

---
dataset_info:
  features:
  - name: hexsha
  [...]
    dtype: float64
  splits:
  - name: train
    num_bytes: 11704058396.971489
    num_examples: 893792
  - name: test
    num_bytes: 650224011.516795
    num_examples: 49655
  - name: valid
    num_bytes: 650237106.351384
    num_examples: 49656
  download_size: 3673273607
  dataset_size: 13004519514.839666
---

# Dataset Card for "the-stack-rust-clean"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Updated version (6b985ad):

---
license: openrail
dataset_info:
  features:
  - name: hexsha
  [...]
    dtype: float64
  splits:
  - name: train
    num_bytes: 3582248477.9086223
    num_examples: 806789
  - name: test
    num_bytes: 394048264.9973618
    num_examples: 88747
  - name: valid
    num_bytes: 3982797.09401595
    num_examples: 897
  download_size: 1323156008
  dataset_size: 3980279540
task_categories:
- text-generation
language:
- code
tags:
- code
pretty_name: TheStack-Rust
size_categories:
- 1M<n<10M
---
 
## Dataset 1: TheStack - Rust - Cleaned

**Description**: This dataset is drawn from The Stack, an open-source code corpus of over 3 TB of GitHub data covering 48 programming languages. We selected a small portion of it to adapt smaller language models to Rust, a popular statically typed language.

**Target Language**: Rust

**Dataset Size**:
- Training: 900,000 files
- Validation: 50,000 files
- Test: 50,000 files
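
For reference, a minimal loading sketch with the `datasets` library; the repository id `ammarnasr/the-stack-rust-clean` and the `content` field are assumptions based on this card's name and The Stack's schema, not stated in the card itself:

```python
from datasets import load_dataset

# Assumed repository id; adjust if the dataset is hosted under a different name.
ds = load_dataset("ammarnasr/the-stack-rust-clean")

for split_name, split in ds.items():
    print(f"{split_name}: {split.num_rows} files")

# Each row is one source file; `content` (a field from The Stack's schema,
# assumed to be preserved here) holds the raw Rust code.
print(ds["train"][0]["content"][:200])
```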

**Preprocessing**:
1. Selected Rust as the target language due to its popularity on GitHub.
2. Filtered out files with an average line length > 100 characters, a maximum line length > 1000 characters, or an alphabet ratio < 25% (see the sketch after this list).
3. Split the remaining files into 90% training, 5% validation, and 5% test sets.
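
The cleaning code itself is not published in this card; the sketch below only illustrates the three criteria above, assuming the raw files come from the Rust subset of `bigcode/the-stack` and carry their text in a `content` field:

```python
from datasets import load_dataset

def keep_file(example):
    """Return True if a source file passes the cleaning criteria described above."""
    text = example["content"]  # raw file text (field name from The Stack's schema)
    lines = text.splitlines()
    if not text or not lines:
        return False
    lengths = [len(line) for line in lines]
    avg_line_length = sum(lengths) / len(lengths)
    max_line_length = max(lengths)
    alphabet_ratio = sum(ch.isalpha() for ch in text) / len(text)
    return avg_line_length <= 100 and max_line_length <= 1000 and alphabet_ratio >= 0.25

# Assumed source: the Rust subset of The Stack.
raw = load_dataset("bigcode/the-stack", data_dir="data/rust", split="train")
clean = raw.filter(keep_file)

# 90% train, 5% validation, 5% test.
first = clean.train_test_split(test_size=0.10, seed=42)
second = first["test"].train_test_split(test_size=0.50, seed=42)
splits = {"train": first["train"], "valid": second["train"], "test": second["test"]}
```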

**Tokenizer**: Byte Pair Encoding (BPE) tokenizer based on the GPT-2 vocabulary, extended with special tokens for tabs and whitespace.
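
The exact token set is not listed here; a rough sketch of this kind of extension with `transformers`, where the choice of a tab token plus runs of 2-8 spaces is an assumption for illustration:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Assumed whitespace tokens: a tab plus runs of 2-8 spaces,
# the indentation units that occur most often in source code.
whitespace_tokens = ["\t"] + [" " * n for n in range(2, 9)]
added = tokenizer.add_tokens(whitespace_tokens)
tokenizer.add_special_tokens({"pad_token": "<|pad|>"})

print(f"added {added} tokens; vocabulary size is now {len(tokenizer)}")
# A model trained with this tokenizer needs its embeddings resized:
# model.resize_token_embeddings(len(tokenizer))
```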

**Training Sequences**: Sequences are constructed by concatenating training-set text until a context length of 2048 tokens is reached (1024 tokens for full fine-tuning).
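
A sketch of one way such packing could be implemented, reusing the `tokenizer` and `splits` names from the sketches above; the end-of-text separator between files is an assumption:

```python
def pack_sequences(split, tokenizer, block_size=2048):
    """Concatenate tokenized files and slice them into fixed-length blocks."""
    buffer = []
    for example in split:
        buffer.extend(tokenizer(example["content"])["input_ids"])
        buffer.append(tokenizer.eos_token_id)  # assumed separator between files
        while len(buffer) >= block_size:
            yield {"input_ids": buffer[:block_size]}
            buffer = buffer[block_size:]

# block_size=2048 for the pretraining-style setup, 1024 for full fine-tuning.
train_blocks = list(pack_sequences(splits["train"], tokenizer, block_size=2048))
```

Packing files back to back like this avoids padding and keeps every position in a 2048-token block filled with real code.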