add SmolVLM and Ecosystem chart
README.md CHANGED
@@ -11,16 +11,16 @@ pinned: false
 This is the home for smol models (SmolLM) and high-quality pre-training datasets. We released:
 
 - [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu): a filtered version of the FineWeb dataset for educational content, paper available [here](https://huggingface.co/papers/2406.17557).
-- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia): the largest open synthetic dataset, with 25B tokens and
+- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia): the largest open synthetic dataset, with 25B tokens and 30M samples. It contains synthetic textbooks, blog posts, stories, and posts generated by Mixtral. Blog post available [here](https://huggingface.co/blog/cosmopedia).
 - [Smollm-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus): the pre-training corpus of SmolLM: **Cosmopedia v0.2**, **FineWeb-Edu dedup** and **Python-Edu**. Blog post available [here](https://huggingface.co/blog/smollm).
 - [SmolLM models](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966) and [SmolLM2 models](https://huggingface.co/collections/HuggingFaceTB/smollm2-checkpoints-6723884218bcda64b34d7db9): a series of strong small models in three sizes: 135M, 360M and 1.7B
-
+- [SmolVLM](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct): a 2-billion-parameter Vision Language Model (VLM) built for on-device inference. It uses SmolLM2-1.7B as its language backbone. Blog post available [here](https://huggingface.co/blog/smolvlm).
 
 **News 🗞️**
 
-- SmolLM2: you can find our most capable model, SmolLM2-1.7B, here: https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct
+- SmolLM2: you can find our most capable model, SmolLM2-1.7B, here: https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct, and our training and evaluation toolkit here: https://github.com/huggingface/smollm
 - We released our SFT mix SmolTalk, a 1M-sample synthetic dataset to improve instruction following, chat, and reasoning: https://hf.co/datasets/HuggingFaceTB/smoltalk
+- SmolVLM: a lightweight 2B Vision Language Model, available here: https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct
 
 <div align="center">
-<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/
-<p><em>Comparison of models finetuned on SmolTalk and Orca AgentInstruct 1M. For more details, refer to the <a href="https://huggingface.co/datasets/HuggingFaceTB/smoltalk" target="_blank">dataset card</a>.</em></p>
+<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/RvHjdlRT5gGQt5mJuhXH9.png" width="900"/>
 </div>
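
The news entry above links SmolLM2-1.7B-Instruct without a usage snippet. A minimal sketch of chatting with the model through `transformers`, assuming the standard causal-LM loading path; the sampling settings are illustrative choices, not values taken from the model card:

```python
# Minimal sketch (not from the model card): chat with SmolLM2-1.7B-Instruct
# using transformers. Sampling settings below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Format the conversation with the model's built-in chat template.
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(
    input_ids, max_new_tokens=128, do_sample=True, temperature=0.2, top_p=0.9
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```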
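
The datasets referenced above (FineWeb-Edu and the SmolTalk SFT mix) can likewise be inspected with the `datasets` library. A sketch under the assumption that the `sample-10BT` and `all` configurations listed on the respective dataset cards are still current:

```python
# Minimal sketch: peek at the datasets referenced above. The configuration
# names ("sample-10BT", "all") are assumptions based on the dataset cards;
# verify them on the Hub if loading fails.
from datasets import load_dataset

# Stream FineWeb-Edu so the full corpus is never materialized locally.
fineweb_edu = load_dataset(
    "HuggingFaceFW/fineweb-edu", name="sample-10BT", split="train", streaming=True
)
print(next(iter(fineweb_edu))["text"][:200])

# SmolTalk, the SFT mix: each record holds a list of chat "messages".
smoltalk = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")
print(smoltalk[0]["messages"])
```

Streaming keeps the FineWeb-Edu lookup cheap, since only the requested records are fetched rather than the full corpus.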