# Pretokenized GitHub Code Dataset
## Dataset Description
This is a pretokenized version of the Python files of the [GitHub Code dataset](https://huggingface.co/datasets/lvwerra/github-code), which consists of 115M code files from GitHub in 32 programming languages. We tokenized the dataset using a BPE tokenizer trained on code, available in this [repo](https://huggingface.co/lvwerra/codeparrot). A pretokenized dataset can speed up the training loop, since the data no longer has to be tokenized at each batch call. We also include `ratio_char_token`, the ratio between the number of characters in a file and the number of tokens produced by tokenization; this ratio can be a useful filter for detecting outlier files.
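
For illustration, here is how such a ratio can be recomputed from raw text with the tokenizer linked above. This is a minimal sketch, not the exact script used to build the dataset:

```python
from transformers import AutoTokenizer

# BPE tokenizer trained on code, from https://huggingface.co/lvwerra/codeparrot
tokenizer = AutoTokenizer.from_pretrained("lvwerra/codeparrot")

code = "def add(a, b):\n    return a + b\n"
input_ids = tokenizer(code)["input_ids"]

# Characters per token; unusually high or low values can flag
# outlier files such as data blobs or minified code.
ratio_char_token = len(code) / len(input_ids)
```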
### How to use it
To avoid downloading the whole dataset, you can make use of the streaming API of `datasets`. You can load and iterate through the dataset with a few lines of code:
```python
from datasets import load_dataset

ds = load_dataset("loubnabnl/tokenized-github-code-python", streaming=True, split="train")
print(next(iter(ds)))
# OUTPUT:
# {'input_ids': [504, 1639, 492, ..., 199, 504, 1639],
#  'ratio_char_token': 3.560888252148997}
```
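
Because the files are already tokenized, the `input_ids` can be fed to a model directly. Below is a minimal sketch of packing the streamed examples into fixed-length training batches; it assumes PyTorch and a fixed context length, and the chunking helper is illustrative rather than part of the dataset:

```python
import torch
from datasets import load_dataset

ds = load_dataset("loubnabnl/tokenized-github-code-python", streaming=True, split="train")

def fixed_length_chunks(dataset, seq_len=1024):
    # Concatenate the pretokenized files and cut them into
    # fixed-length blocks, so no tokenizer is needed at batch time.
    buffer = []
    for example in dataset:
        buffer.extend(example["input_ids"])
        while len(buffer) >= seq_len:
            yield torch.tensor(buffer[:seq_len])
            buffer = buffer[seq_len:]

for chunk in fixed_length_chunks(ds):
    print(chunk.shape)  # torch.Size([1024])
    break
```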