chaoscodes committed a4cfa30 ("Update README.md")
Parent: df87ba8

README.md (changed)
@@ -20,7 +20,7 @@ https://github.com/jzhang38/TinyLlama
 We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.

 #### This Model
-In this repo, we release our TinyLlama training only with 2T tokens on SlimPajama dataset. (~3 epochs)
+In this repo, we release our TinyLlama-v2 training only with 2T tokens on SlimPajama dataset. (~3 epochs)

 #### How to use
 You will need the transformers>=4.31
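The "How to use" section in the diff only states the `transformers>=4.31` requirement. A minimal usage sketch is below: it checks that requirement, then loads the model with the standard `AutoTokenizer`/`AutoModelForCausalLM` API. The repo id is a placeholder (the commit does not name it), and the version helper assumes plain numeric version strings.

```python
# Sketch of the "How to use" step: verify transformers>=4.31, then load TinyLlama.
# The repo id below is a placeholder, not taken from the commit.
def meets_min_version(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically (e.g. '4.31.0' >= '4.31').
    Simplification: assumes purely numeric components (no '.dev0' suffixes)."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    a, b = to_tuple(installed), to_tuple(required)
    # Pad the shorter tuple with zeros so '4.31' compares equal to '4.31.0'.
    n = max(len(a), len(b))
    return a + (0,) * (n - len(a)) >= b + (0,) * (n - len(b))

if __name__ == "__main__":
    import transformers
    if not meets_min_version(transformers.__version__, "4.31"):
        raise RuntimeError("This model needs transformers>=4.31 (Llama 2 support)")

    from transformers import AutoModelForCausalLM, AutoTokenizer
    repo_id = "ORG/REPO"  # hypothetical; substitute the actual Hub repo id
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    inputs = tokenizer("The capital of France is", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=16)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Because TinyLlama reuses the Llama 2 architecture and tokenizer, the same loading code works for any Llama-compatible checkpoint on the Hub.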