We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be plugged into many open-source projects built upon Llama. Besides, TinyLlama is compact, with only 1.1B parameters, which lets it serve the many applications that demand a restricted computation and memory footprint.
#### This Model
This is an intermediate checkpoint, trained for 480K steps on 1007B tokens.
#### How to use
You will need `transformers>=4.31` (e.g. `pip install "transformers>=4.31"`). Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.

```python
from transformers import AutoTokenizer
import transformers
import torch
```
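
A minimal sketch of how those imports are typically used: load the checkpoint through the `transformers` text-generation pipeline and sample a completion. The repo id and the sampling parameters below are illustrative assumptions, not part of this README; substitute the actual Hugging Face id of this checkpoint.

```python
from transformers import AutoTokenizer
import transformers
import torch

# Assumed repo id for this intermediate checkpoint; substitute the real one.
model = "TinyLlama/TinyLlama-1.1B-intermediate-step-480k-1T"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,  # fp16 keeps the 1.1B model light in memory
    device_map="auto",          # spread layers across available devices
)

sequences = pipeline(
    "The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens.",
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    repetition_penalty=1.5,
    eos_token_id=tokenizer.eos_token_id,
    max_length=500,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

Note that `device_map="auto"` requires the `accelerate` package; to pin the model to a single device instead, drop it and pass `device=0` (or `device="cpu"`) to `transformers.pipeline`.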