Commit afff329 (parent: bab5667) by chaoscodes: Update README.md

We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be used as a drop-in component in many open-source projects built upon Llama. Besides, TinyLlama is compact, with only 1.1B parameters, so it suits the many applications that demand a small computation and memory footprint.
### Pretraining
Due to these issues ([bug1](https://whimsical-aphid-86d.notion.site/Release-of-TinyLlama-1-5T-Checkpoints-Postponed-01b266998c1c47f78f5ae1520196d194?pvs=4), [bug2](https://whimsical-aphid-86d.notion.site/2023-12-18-Updates-from-TinyLlama-Team-7d30c01fff794da28ccc952f327c8d4f)), we retrained TinyLlama to provide a better model. We trained it on 2T tokens and divided pretraining into three stages: 1) basic pretraining, 2) continual pretraining with specific domains, and 3) cooldown.
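
The three stages and their token budgets, as described in the subsections below, can be summarized as follows (stage boundaries are taken from the text; the exact released training configuration may differ):

```python
# Token budgets of the three pretraining stages (boundaries from the text).
STAGES = [
    ("basic pretraining",     0.00e12, 1.50e12),  # SlimPajama only
    ("continual pretraining", 1.50e12, 1.85e12),  # adds domain-specific corpora
    ("cooldown",              1.85e12, 2.00e12),  # larger batch size, same LR schedule
]

# The stages together cover the full 2T-token budget.
total = sum(end - start for _, start, end in STAGES)
print(f"total: {total / 1e12:.2f}T tokens")  # total: 2.00T tokens
```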
#### Basic pretraining
In this initial phase, we trained the model on a language-only corpus (SlimPajama) to develop its commonsense reasoning capabilities. The model was trained on 1.5T tokens during this basic pretraining period. Due to memory constraints, we set the batch size to approximately 1.8M tokens.
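
Back-of-the-envelope, those two figures imply roughly how many optimizer steps this phase takes (a sketch based on the numbers above; the real count depends on the exact batch size):

```python
# Approximate optimizer steps in basic pretraining:
# 1.5T tokens at ~1.8M tokens per batch.
tokens = 1.5e12
tokens_per_batch = 1.8e6

steps = tokens / tokens_per_batch
print(f"~{steps:,.0f} steps")  # ~833,333 steps
```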
#### Continual pretraining with specific domains
We incorporated three different kinds of corpora during this stage: SlimPajama (the same as in the first phase), Code & Math (StarCoder and Proof Pile), and Chinese (SkyPile). This approach allowed us to develop three variant models with specialized capabilities.
During the first ~6B tokens of this stage, we linearly increased the sampling proportion of the domain-specific corpora (excluding SlimPajama, whose proportion remained unchanged from stage 1). This warmup strategy was designed to adjust the distribution of the pretraining data gradually, ensuring a more stable training process. After the warmup, we continued pretraining with a fixed sampling strategy until reaching ~1.85T tokens.
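
A minimal sketch of such a linear sampling warmup (the final proportion here is an illustrative assumption, not the released configuration):

```python
def domain_sampling_proportion(tokens_seen: float,
                               warmup_tokens: float = 6e9,
                               final_proportion: float = 0.25) -> float:
    """Linearly ramp the domain-specific sampling proportion from 0 up to
    final_proportion over the first warmup_tokens, then hold it fixed."""
    ramp = min(tokens_seen / warmup_tokens, 1.0)
    return final_proportion * ramp

# Halfway through the 6B-token warmup the proportion is half its final value.
print(domain_sampling_proportion(3e9))   # 0.125
print(domain_sampling_proportion(1e11))  # 0.25 (warmup finished)
```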
#### Cooldown
Implementing a cooldown phase has become a crucial technique for achieving better model convergence at the end of pretraining. However, since we had already adopted a cosine learning rate schedule from the beginning, it is challenging to alter the learning rate for cooldown the way MiniCPM or DeepSeek do. Therefore, we cooled down by adjusting the batch size instead: we increased it from 1.8M to 7.2M tokens while keeping the original cosine learning rate schedule.
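
This schedule can be sketched as follows (token boundaries and batch sizes are from the text; the peak and minimum learning rates are illustrative assumptions):

```python
import math

TOTAL_TOKENS = 2.0e12     # full pretraining budget
COOLDOWN_START = 1.85e12  # batch size increases here

def cosine_lr(tokens_seen: float, peak_lr: float = 4e-4, min_lr: float = 4e-5) -> float:
    """Cosine decay over the whole run; unchanged during cooldown."""
    progress = min(tokens_seen / TOTAL_TOKENS, 1.0)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

def batch_size_tokens(tokens_seen: float) -> int:
    """Batch size jumps from ~1.8M to ~7.2M tokens at the cooldown boundary."""
    return 7_200_000 if tokens_seen >= COOLDOWN_START else 1_800_000

print(batch_size_tokens(1.0e12))  # 1800000
print(batch_size_tokens(1.9e12))  # 7200000
```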
#### TinyLlama model family
Following this extensive and detailed pretraining process, we are now releasing three specialized versions of our model:
1. **TinyLlama_v2**: The standard version, used for general purposes.
2. **TinyLlama_v2_math_code**: Equipped with better ability for math and code.
3. **TinyLlama_v2_chinese**: Equipped with a good understanding of the Chinese language.
### How to use
You will need `transformers>=4.31`.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.

```python
# (The start of this example is not shown in the diff; `sequences` is the
# output of the text-generation call earlier in the README code block.)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
### Eval

| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
| ----------------------------------------- | --------------- | --------- | --------- | ---------- | --------- | --------- | ----- | --------- | --------- |
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |
| TinyLlama-1.1B-v2 | 2T | **61.47** | **36.80** | **59.43** | **32.68** | **55.47** | 55.99 | **73.56** | **53.63** |
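
For reference, the avg column corresponds to the unweighted mean of the seven task scores, shown here for the TinyLlama-1.1B-v2 row:

```python
# Mean of the seven task scores in the TinyLlama-1.1B-v2 row:
# HellaSwag, Obqa, WinoGrande, ARC_c, ARC_e, boolq, piqa.
v2_scores = [61.47, 36.80, 59.43, 32.68, 55.47, 55.99, 73.56]

avg = sum(v2_scores) / len(v2_scores)
print(round(avg, 2))  # 53.63
```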