Modalities: Tabular, Text
Formats: parquet
Languages: Spanish
Libraries: Datasets, Dask

ouhenio committed
Commit ebe4fb2
1 Parent(s): 384996b

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -24,21 +24,21 @@ configs:
   path: data/train-*
 ---
 
-# Red Pajama's High Quality Spanish subset
+# RedPajama's High Quality Spanish subset
 
 ## What is this?
 
-The following is a high-quality dataset distilled from the Spanish subsection of [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data), created using the methodology proposed for [FineWEB-Edu](https://arxiv.org/abs/2406.17557).
+The following is a high-quality dataset distilled from the Spanish subsection of [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data), created using the methodology proposed in [FineWEB-Edu](https://arxiv.org/abs/2406.17557).
 
 ## Dataset creation
 
-In a nutshell, we use Llama-3.1-70B to grade the educational quality of various samples from the original dataset. Then, we used these 500K samples to train a classifier using an encoder-based model, so that it learns to assign a score from 0 to 5. Since this model is way cheaper to use than an LLM, we run it over the entire dataset, thus getting a high-quality section from it.
+In a nutshell, we use Llama-3.1-70B to grade the educational quality of 550k samples from the original dataset. Then, we use these samples to train an encoder-based classifier, so that it learns to assign a score from 0 to 5. Since this model is much cheaper to use than a GPT, we can run it at scale over the entire dataset, allowing us to filter a high-quality section from it.
 
 Here is an overview of the architecture:
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b15c3f20037ec5d7c91aa6/H5xPOHy_4RhMEDtGvsnTE.png)
 
-For more detailed information on how this dataset was created, refer to [our implementation](https://github.com/latam-gpt/llm-data-eval).
+For more detailed information on how this dataset was created, refer to [our open implementation](https://github.com/latam-gpt/llm-data-eval).
 
 ## License
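To make the filtering step described in the updated "Dataset creation" paragraph concrete, here is a minimal sketch of how an encoder-based scorer of this kind could be applied to documents. It is not the project's actual pipeline: the checkpoint name `latam-gpt/spanish-edu-classifier`, the 512-token truncation, and the keep threshold of 3 are all assumptions for illustration; the real model and settings live in the linked [llm-data-eval](https://github.com/latam-gpt/llm-data-eval) repository.

```python
# Illustrative sketch only: score documents with an encoder-based classifier
# (regression head producing a 0-5 educational-quality score) and keep the
# high-scoring ones. Checkpoint name and threshold are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "latam-gpt/spanish-edu-classifier"  # hypothetical checkpoint id
THRESHOLD = 3.0  # assumed cut-off on the 0-5 scale

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=1)
model.eval()

def edu_scores(texts: list[str]) -> list[float]:
    """Return one educational-quality score per input text."""
    batch = tokenizer(texts, truncation=True, padding=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        # num_labels=1 gives a regression head: logits shape (batch, 1)
        scores = model(**batch).logits.squeeze(-1)
    return scores.tolist()

docs = [
    "La fotosíntesis es el proceso por el cual las plantas convierten la luz solar en energía química.",
    "¡¡compra ahora!! los mejores precios en zapatos baratos...",
]
kept = [d for d, s in zip(docs, edu_scores(docs)) if s >= THRESHOLD]
print(kept)
```

The regression head (a single output label) mirrors the 0-to-5 grading scheme described above, which is why scores are compared against a numeric threshold rather than taking an argmax over classes.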