bagelnet committed on
Commit 6b585dc (0 parents)

Super-squash branch 'main' using huggingface_hub

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,129 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ language:
+ - en
+ ---
+
+
+ # SmolLM2
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/XlT5TM3HWpfoZk_HSubrH.png)
+
+ ## Table of Contents
+
+ 1. [Model Summary](#model-summary)
+ 2. [Evaluation](#evaluation)
+ 3. [Limitations](#limitations)
+ 4. [Training](#training)
+ 5. [License](#license)
+ 6. [Citation](#citation)
+
+ ## Model Summary
+
+ SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
+
+ The 1.7B variant demonstrates significant advances over its predecessor, SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse combination of datasets: FineWeb-Edu, DCLM, and The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
+
+ The instruct model additionally supports tasks such as text rewriting, summarization, and function calling, thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
+
+ ### How to use
+
+ ```bash
+ pip install transformers
+ ```
+
+ #### Running the model on CPU/GPU/multiple GPUs
+ * _Using full precision_
+ ```python
+ # pip install transformers
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ checkpoint = "HuggingFaceTB/SmolLM2-1.7B"
+ device = "cuda"  # for GPU usage or "cpu" for CPU usage
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ # for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
+ model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
+ inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ * _Using `torch.bfloat16`_
+ ```python
+ # pip install accelerate
+ import torch
+ # for fp16 use `torch_dtype=torch.float16` instead
+ model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
+ inputs = tokenizer.encode("Gravity is", return_tensors="pt").to("cuda")
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
+ ```bash
+ >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
+ Memory footprint: 3422.76 MB
+ ```
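+
+ The ~3.4 GB footprint is roughly what you would expect for about 1.7B parameters stored as 2-byte bfloat16 weights.
+
+ * _Chatting with the instruct variant_ (a minimal sketch, assuming the separately released `HuggingFaceTB/SmolLM2-1.7B-Instruct` checkpoint; the sampling settings below are purely illustrative)
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # assumed instruct checkpoint, not this base model
+ device = "cuda"  # or "cpu"
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
+
+ # Format the conversation with the tokenizer's built-in chat template
+ messages = [{"role": "user", "content": "What is gravity?"}]
+ input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
+ outputs = model.generate(inputs, max_new_tokens=100, temperature=0.2, top_p=0.9, do_sample=True)
+ print(tokenizer.decode(outputs[0]))
+ ```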
+
+ ## Evaluation
+
+ In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
+
+ ### Base Pre-Trained Model
+
+ | Metric           | SmolLM2-1.7B | Llama-1B | Qwen2.5-1.5B | SmolLM1-1.7B |
+ |------------------|--------------|----------|--------------|--------------|
+ | HellaSwag        | **68.7**     | 61.2     | 66.4         | 62.9         |
+ | ARC (Average)    | **60.5**     | 49.2     | 58.5         | 59.9         |
+ | PIQA             | **77.6**     | 74.8     | 76.1         | 76.0         |
+ | MMLU-Pro (MCF)   | **19.4**     | 11.7     | 13.7         | 10.8         |
+ | CommonsenseQA    | **43.6**     | 41.2     | 34.1         | 38.0         |
+ | TriviaQA         | **36.7**     | 28.1     | 20.9         | 22.5         |
+ | Winogrande       | **59.4**     | 57.8     | 59.3         | 54.7         |
+ | OpenBookQA       | 42.2         | 38.4     | 40.0         | **42.4**     |
+ | GSM8K (5-shot)   | 31.0         | 7.2      | **61.3**     | 5.5          |
+
+ ### Instruction Model
+
+ | Metric                              | SmolLM2-1.7B-Instruct | Llama-1B-Instruct | Qwen2.5-1.5B-Instruct | SmolLM1-1.7B-Instruct |
+ |:------------------------------------|:---------------------:|:-----------------:|:---------------------:|:---------------------:|
+ | IFEval (Average prompt/inst)        | **56.7**              | 53.5              | 47.4                  | 23.1                  |
+ | MT-Bench                            | 6.13                  | 5.48              | **6.52**              | 4.33                  |
+ | OpenRewrite-Eval (micro_avg RougeL) | 44.9                  | 39.2              | **46.9**              | NaN                   |
+ | HellaSwag                           | **66.1**              | 56.1              | 60.9                  | 55.5                  |
+ | ARC (Average)                       | **51.7**              | 41.6              | 46.2                  | 43.7                  |
+ | PIQA                                | **74.4**              | 72.3              | 73.2                  | 71.6                  |
+ | MMLU-Pro (MCF)                      | 19.3                  | 12.7              | **24.2**              | 11.7                  |
+ | BBH (3-shot)                        | 32.2                  | 27.6              | **35.3**              | 25.7                  |
+ | GSM8K (5-shot)                      | **48.2**              | 26.8              | 42.8                  | 4.62                  |
+
+
+ ## Limitations
+
+ SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
+
+ ## Training
+
+ ### Model
+
+ - **Architecture:** Transformer decoder
+ - **Pretraining tokens:** 11T
+ - **Precision:** bfloat16
+
+ ### Hardware
+
+ - **GPUs:** 256 H100
+
+ ### Software
+
+ - **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
+
+ ## License
+
+ [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+
+ ## Citation
+ ```bibtex
+ @misc{allal2024SmolLM2,
+   title={SmolLM2 - with great data, comes great performance},
+   author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
+   year={2024},
+ }
+ ```
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "/fsx/elie_bakouch/nanotron-ckpt/360M-50B-8k-130k-rope-end/hf",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 0,
+   "eos_token_id": 0,
+   "hidden_act": "silu",
+   "hidden_size": 2048,
+   "initializer_range": 0.02,
+   "intermediate_size": 8192,
+   "max_position_embeddings": 8192,
+   "model_type": "llama",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 24,
+   "num_key_value_heads": 32,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 130000,
+   "tie_word_embeddings": true,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.40.1",
+   "use_cache": true,
+   "vocab_size": 49152
+ }
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46dc0494fa27c368c41b021666c868570469993386a093bca18aff3c1e13b065
+ size 307852839
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "eos_token_id": 0,
+   "transformers_version": "4.40.1"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1193528982f4ac0c0b707ce36fd7dc03a0ef6f3e1a432deb886dce2e90c300c0
+ size 3422777952
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a8c0174fa3ee6fe2ffd0f6e21992d4ca4ad1e9b12bd14155b57479e27f56292
+ size 307905733
special_tokens_map.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "additional_special_tokens": [
+     "<|endoftext|>",
+     "<|im_start|>",
+     "<|im_end|>",
+     "<repo_name>",
+     "<reponame>",
+     "<file_sep>",
+     "<filename>",
+     "<gh_stars>",
+     "<issue_start>",
+     "<issue_comment>",
+     "<issue_closed>",
+     "<jupyter_start>",
+     "<jupyter_text>",
+     "<jupyter_code>",
+     "<jupyter_output>",
+     "<jupyter_script>",
+     "<empty_output>"
+   ],
+   "bos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c951cf31decd4a719df82f84d0ad38fca5173b09393b22b864ab8a55ba03d7c
+ size 439831352
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
vocab.json ADDED
The diff for this file is too large to render. See raw diff