loubnabnl committed
Commit ab8f77d
1 Parent(s): 75fa67c

remove todo

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -499,7 +499,7 @@ We fine-tuned a Bert-like regression model using these annotations, based on [Sn
 The classifier is available at: [https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier/](HuggingFaceFW/fineweb-edu-classifier/)
 
 ### Filtering and results
-**Note**: You can find more details about the ablations and results in the FineWeb blog post (TODO).
+**Note**: You can find more details about the ablations and results in the FineWeb [blog post](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1).
 
 We investigated the impact of using different thresholds for the filtering and found that threshold 3 gave the best overall results. Although using a threshold higher than 3 improves performance on knowledge and reasoning intensive benchmarks, it significantly degrades performance on HellaSwag and PIQA.
 
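
For context on the section being edited: it describes scoring documents with the linked `HuggingFaceFW/fineweb-edu-classifier` and keeping those at or above the threshold (3) that the README reports as giving the best overall results. Below is a minimal sketch of that kind of filtering, assuming the classifier loads through the standard `transformers` sequence-classification API with a single regression output; the example texts and the exact pre/post-processing are illustrative and may differ from the pipeline used to build the dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed loading path for the linked classifier: a BERT-like model with a
# single regression head that outputs an educational-quality score (roughly 0-5).
model_name = "HuggingFaceFW/fineweb-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def edu_score(text: str) -> float:
    """Return the classifier's educational-quality score for one document."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding="longest")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, 1)
    return logits.squeeze(-1).float().item()

# Keep documents scoring at or above the threshold the README reports as best (3).
# These sample documents are made up for illustration.
docs = [
    "Photosynthesis converts light energy into chemical energy stored in glucose...",
    "lol ok brb",
]
kept = [d for d in docs if edu_score(d) >= 3]
print(kept)
```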