Sample dataset?

#23
by dweb - opened

Would it be possible for you all to release a sample of this dataset? I realize it's broken up into CC crawl dumps that can be downloaded individually, but any one of those dumps won't be representative of the whole dataset.

Dolma has a "v1_6-sample", which is 16.4 GB gzipped and around 10B tokens. I've found it very helpful. Could a sample of around that size be made for FineWeb?

Oh, awesome. Great minds think alike 😏

I am working on 100m, 300m, and 1b token samples as well, controlling for diversity in document length and cluster quality after I run embeddings.

I personally don't feel any sampling controlled for clustering or document length is necessary for this. Simple random subsamples of different sizes (with one around 10B tokens) would suffice (for me).
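
For instance, a naive streamed subsample could look something like the sketch below. The dump name, keep rate, and token budget are purely illustrative, and I'm assuming the rows carry a per-document `token_count` field as described on the dataset card:

```python
import random

from datasets import load_dataset

# Illustrative sketch: stream a single dump and keep documents uniformly at
# random until a rough token budget is met. Dump name, keep rate, and budget
# are placeholders, not an official recipe.
stream = load_dataset(
    "HuggingFaceFW/fineweb",
    name="CC-MAIN-2024-10",   # any one dump; swap in others as needed
    split="train",
    streaming=True,
)

random.seed(0)
keep_rate = 0.01              # keep ~1% of documents
token_budget = 10_000_000     # toy budget; scale up for a real sample

kept_tokens = 0
for doc in stream:
    if random.random() < keep_rate:
        kept_tokens += doc["token_count"]  # per-document token count column
        # ... append doc["text"] to your output shard here ...
        if kept_tokens >= token_budget:
            break
```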

HuggingFaceFW org
β€’
edited 26 days ago

Hi, we've just added subsets randomly sampled from the whole dataset with 10, 100 and 350 billion tokens: https://huggingface.co/datasets/HuggingFaceFW/fineweb#smaller-sample-versions
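
For anyone finding this later, the samples can be streamed directly with `datasets`. A minimal sketch, assuming the config names follow the `sample-10BT` / `sample-100BT` / `sample-350BT` pattern from the dataset card:

```python
from datasets import load_dataset

# Stream the 10B-token sample without downloading it in full.
fw = load_dataset(
    "HuggingFaceFW/fineweb",
    name="sample-10BT",   # or "sample-100BT" / "sample-350BT"
    split="train",
    streaming=True,
)

# Peek at the first document.
print(next(iter(fw))["text"][:200])
```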

Thank you so much for this sample set! However, if I want to pre-train my model with only ~2T tokens, are there any suggestions on how to choose a data subset from among the different dumps?

HuggingFaceFW org

We plan to release more details on this later, but I would generally avoid dumps 2021-49 to 2023-14 inclusive. 2019-26 to 2021-43 are quite strong, and the last two (2023-50 and 2024-10) are the best in our testing.
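
In case it's useful, here's a rough sketch of streaming only a few of the stronger dumps named above. The list is illustrative and far from exhaustive, and I'm assuming the per-dump configs follow the `CC-MAIN-<year>-<week>` naming from the dataset card:

```python
from datasets import interleave_datasets, load_dataset

# Illustrative subset of the recommended dumps; the full 2019-26..2021-43
# range contains many more.
good_dumps = ["CC-MAIN-2019-26", "CC-MAIN-2021-43", "CC-MAIN-2023-50", "CC-MAIN-2024-10"]

streams = [
    load_dataset("HuggingFaceFW/fineweb", name=dump, split="train", streaming=True)
    for dump in good_dumps
]
mixed = interleave_datasets(streams)

for doc in mixed:
    # ... feed doc["text"] into your tokenization / training pipeline ...
    break
```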
