nicholasKluge committed on
Commit 7a41702 · verified · 1 Parent(s): 7b43c43

Update README.md

Files changed (1): README.md +1 −2
README.md CHANGED
@@ -35,14 +35,13 @@ pretty_name: Pt-Corpus tokenized
 size_categories:
 - 1M<n<10M
 ---
-
 # Pt-Corpus-tokenized-2048
 
 Pt-Corpus is a concatenation of several portions of Brazilian Portuguese datasets found in the [Hub](https://huggingface.co/datasets?task_categories=task_categories:text-generation&language=language:pt&sort=trending).
 
 In a tokenized format, the dataset (uncompressed) weighs 50 GB and has approximately 4.1B tokens. This version does not have instructional content.
 
-This repository has a tokenized version (using the TeenyTinyLlama tokenizer) of the Pt-Corpus dataset. All sequences are 2048 tokens long.
+This repository has a tokenized version (using the [TeenyTinyLlama tokenizer](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m)) of the [Pt-Corpus dataset](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus). All sequences are 2048 tokens long.
 
 ## How to use
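
The changed line describes a pre-tokenized, fixed-length (2048-token) release of Pt-Corpus. As a minimal sketch of how such a dataset could be loaded with the 🤗 `datasets` library: the repository id `nicholasKluge/Pt-Corpus-tokenized-2048` is assumed from the README title, the `input_ids` column name is an assumption, and streaming is used because the uncompressed data is roughly 50 GB.

```python
# Minimal sketch, assuming the dataset id matches the README title
# ("nicholasKluge/Pt-Corpus-tokenized-2048"); adjust if the actual repo id differs.
from datasets import load_dataset

# Stream the split so the ~50 GB uncompressed dataset is not downloaded up front.
dataset = load_dataset(
    "nicholasKluge/Pt-Corpus-tokenized-2048",
    split="train",
    streaming=True,
)

# Each record should hold a pre-tokenized sequence of 2048 token ids
# produced with the TeenyTinyLlama tokenizer.
sample = next(iter(dataset))
print(sample.keys())
print(len(sample["input_ids"]))  # expected: 2048 (column name assumed)
```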