---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
language:
- en
---
<div align="center">

# TinyLlama-1.1B
</div>

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, this can be achieved in a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.

<div align="center">
  <img src="./TinyLlama_logo.png" width="300"/>
</div>

We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama. TinyLlama is also compact, with only 1.1B parameters, which makes it suitable for applications with a restricted computation and memory footprint.
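
Because the architecture and tokenizer match Llama 2, the model can be loaded with the standard `transformers` API. Below is a minimal loading sketch; the repository id is a placeholder (swap in this repo's actual id), and the dtype/device settings are just illustrative:

```python
# Minimal loading sketch; assumes `transformers` and `torch` are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PY007/TinyLlama-1.1B-Chat-v0.2"  # placeholder: use this repo's actual id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```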

#### This Model
This is the chat model finetuned on [PY007/TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b). The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25).

**Update from V0.1: 1. Different dataset. 2. Different chat format (now [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) formatted conversations).**
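
Since the chat format is ChatML, prompts should wrap each turn in the usual ChatML role markers. The sketch below is an assumption based on the generic ChatML layout (`<|im_start|>` / `<|im_end|>`); check the tokenizer configuration for the exact special tokens this finetune expects before relying on it. The repository id is again a placeholder:

```python
# Hedged ChatML prompting sketch; the <|im_start|>/<|im_end|> markers follow the
# generic ChatML convention and are assumed, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PY007/TinyLlama-1.1B-Chat-v0.2"  # placeholder: use this repo's actual id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain what TinyLlama is in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```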