---
title: README
emoji: 👍
colorFrom: purple
colorTo: green
sdk: static
pinned: false
---

# HuggingFaceTB

This is the home of synthetic datasets for pre-training, such as Cosmopedia v1 and v2. We scale synthetic data generation by curating diverse prompts that cover a wide range of topics and by running generation efficiently on GPUs with tools like llm-swarm.
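
As a rough illustration of that workflow, here is a minimal sketch of the prompt-then-generate pattern. The topics and prompt template are made up for the example, and the real Cosmopedia pipeline shards millions of such prompts across GPUs with llm-swarm rather than looping locally; running this as-is also needs substantial GPU memory, so any smaller instruct model can be swapped in.

```python
# Minimal sketch of a prompt -> generation loop (illustrative only; the actual
# Cosmopedia pipeline distributes prompts across GPUs with llm-swarm).
from transformers import pipeline

# Hypothetical seed topics and prompt template, not the actual Cosmopedia prompts.
topics = ["photosynthesis", "the French Revolution", "sorting algorithms"]
template = "Write a clear, college-level textbook section about {topic}."

generator = pipeline(
    "text-generation",
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # model used for Cosmopedia generations
    device_map="auto",
)

for topic in topics:
    sample = generator(
        template.format(topic=topic),
        max_new_tokens=512,
        do_sample=True,
        temperature=0.7,
        return_full_text=False,
    )
    print(sample[0]["generated_text"][:200])
```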

We released:

  • Cosmopedia: the largest open synthetic dataset, with 25B tokens and more than 30M samples. It contains synthetic textbooks, blog posts, stories, posts, and WikiHow articles generated by Mixtral-8x7B-Instruct-v0.1.
  • Cosmo-1B: a 1B model trained on Cosmopedia.
  • FineWeb-Edu: a version of the FineWeb dataset filtered for educational content.
  • SmolLM models: a series of strong small models in three sizes: 135M, 360M, and 1.7B parameters.
  • SmolLM-Corpus: the pre-training corpus of the SmolLM models, including Cosmopedia v2, FineWeb-Edu, and Python-Edu.
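
As a quick start for the releases above, the sketch below streams a Cosmopedia subset and loads the smallest SmolLM checkpoint. The subset name "stories" and the "text" column are assumptions about the dataset layout, so adjust them to the configs you need.

```python
# Quick-start sketch: stream a Cosmopedia subset and load SmolLM-135M.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Streaming avoids downloading the full 25B-token dataset up front.
cosmopedia = load_dataset(
    "HuggingFaceTB/cosmopedia", "stories", split="train", streaming=True
)
print(next(iter(cosmopedia))["text"][:200])

# Smallest of the three SmolLM sizes (135M, 360M, 1.7B).
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-135M")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M")

inputs = tokenizer("The Moon is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```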

For more details, check our blog posts: https://huggingface.co/blog/cosmopedia and https://huggingface.co/blog/smollm