Update README.md
README.md CHANGED
@@ -23,10 +23,6 @@ https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
-<div align="center">
-<img src="./TinyLlama_logo.png" width="300"/>
-</div>
-
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
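
As a quick illustration of that plug-and-play claim, here is a minimal sketch using the standard Hugging Face `transformers` Llama loading path; the checkpoint id is a placeholder to be filled with a checkpoint from this collection, not something specified in this diff.

```python
# Minimal sketch: TinyLlama reuses the Llama 2 architecture and tokenizer,
# so any Llama-compatible loader works unchanged.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<checkpoint-id-from-this-collection>"  # placeholder, e.g. a TinyLlama repo id on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The TinyLlama project aims to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
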
#### This Collection