The following is a high-quality dataset distilled from the Spanish subset of [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data), created using the methodology proposed in [FineWeb-Edu](https://arxiv.org/abs/2406.17557).

## Usage

```python
from datasets import load_dataset

ds = load_dataset("latam-gpt/red_pajama_es_hq")
```
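
Since the corpus is large, it may be more practical to stream it than to download everything up front. Below is a minimal sketch using the standard `datasets` streaming mode, assuming the default split is named `train`:

```python
from datasets import load_dataset

# stream records lazily instead of downloading the full corpus first
ds = load_dataset("latam-gpt/red_pajama_es_hq", streaming=True)

# peek at the first record of the (assumed) "train" split
print(next(iter(ds["train"])))
```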

### Filtering by quality score

Each document in this corpus carries an educational-quality score between 2.5 and 5, with higher scores indicating better quality. The dataset can be filtered by score using the standard `filter` method:

```python
from datasets import load_dataset

ds = load_dataset("latam-gpt/red_pajama_es_hq")

# keep only documents with a quality score above 3
filtered_ds = ds.filter(lambda x: x['score'] > 3)
```
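
For faster filtering over the full corpus, the same predicate can be applied in batches across several worker processes; `batched=True` and `num_proc` are standard `datasets` options:

```python
# same filter, applied in batches across 4 worker processes
filtered_ds = ds.filter(
    lambda batch: [s > 3 for s in batch["score"]],
    batched=True,
    num_proc=4,
)
```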

## Dataset creation

In a nutshell, we used Llama-3.1-70B to grade the educational quality of 550k samples from the original dataset. We then used these graded samples to train an encoder-based classifier that learns to assign each document a score from 0 to 5. Since this classifier is much cheaper to run than a large language model, we can apply it at scale over the entire dataset, which allows us to filter out a high-quality subset.
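
To make the scoring step concrete, here is a minimal sketch of running an encoder-based quality classifier over new documents with `transformers`. The checkpoint id is a placeholder (the README does not name the released classifier), and a single-output regression head is assumed:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# placeholder checkpoint: the README does not name the released classifier,
# so substitute the real model id if one is published
MODEL_ID = "latam-gpt/edu-classifier-es"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def score_documents(texts):
    """Assign each document an educational-quality score (regression head assumed)."""
    inputs = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        # shape (batch, 1) for a regression head; squeeze to one score per text
        logits = model(**inputs).logits.squeeze(-1)
    return logits.tolist()

docs = ["La fotosíntesis es el proceso por el cual las plantas convierten la luz en energía química."]
print(score_documents(docs))
```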