How was the original fineweb filtered?
Great job and thanks! Could you help explain how the original FineWeb dataset was processed and filtered to produce this mini version?
I used ChatGPT to build a classifier model. Its purpose is to separate short-term to medium-term knowledge (things like news, ads, and other unimportant information) from long-term knowledge that could be helpful to humans. I then ran the classifier over the dataset to keep only the long-term/useful knowledge, which produced this mini version. That's why the process takes so long to run and why the remaining set of documents is so small. By the way, it's still a work-in-progress repo.
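To make the idea concrete, here is a minimal sketch of that filtering loop. This is not the author's actual pipeline: the `classify` function is a hypothetical stand-in for the ChatGPT-built classifier, and the keyword-based toy version below only illustrates the "drop news/ads, keep durable knowledge" decision.

```python
from typing import Callable, Iterable, Iterator


def filter_long_term(
    docs: Iterable[str], classify: Callable[[str], str]
) -> Iterator[str]:
    """Yield only documents the classifier labels as long-term knowledge."""
    for doc in docs:
        if classify(doc) == "long_term":
            yield doc


def toy_classify(text: str) -> str:
    """Toy stand-in: flag obvious news/ad markers as short-term.

    The real classifier would be a trained model, not keyword matching.
    """
    short_term_markers = ("breaking news", "limited time offer", "click here")
    lowered = text.lower()
    if any(marker in lowered for marker in short_term_markers):
        return "short_term"
    return "long_term"


docs = [
    "Breaking news: stocks fall today.",
    "The Pythagorean theorem relates the sides of a right triangle.",
]
kept = list(filter_long_term(docs, toy_classify))
# kept contains only the second, durable-knowledge document
```

In practice `classify` would wrap a model call per document, which is why the author notes the process is slow and the surviving fraction is small.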
That makes sense. Have you tried using any local/open-source model for this purpose? I'm guessing this cost a lot of $$$ LOL. Thanks again for the work anyway. It would be great if you could share the prompt.
Thanks, but I'll leave that to the experts and big folks like HF/Microsoft.