Upload 9 files
- README.md +2 -32
- .gitattributes +0 -1
- output.safetensors +2 -2
README.md
CHANGED
@@ -21,35 +21,5 @@ The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion to
 
 We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
 
-#### This
-This
-
-
-#### How to use
-You will need the transformers>=4.31
-Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
-```
-from transformers import AutoTokenizer
-import transformers
-import torch
-model = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b"
-tokenizer = AutoTokenizer.from_pretrained(model)
-pipeline = transformers.pipeline(
-    "text-generation",
-    model=model,
-    torch_dtype=torch.float16,
-    device_map="auto",
-)
-
-sequences = pipeline(
-    'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
-    do_sample=True,
-    top_k=10,
-    num_return_sequences=1,
-    repetition_penalty=1.5,
-    eos_token_id=tokenizer.eos_token_id,
-    max_length=500,
-)
-for seq in sequences:
-    print(f"Result: {seq['generated_text']}")
-```
+#### This Collection
+This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
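
Since the new README states that each checkpoint lives on its own branch, a specific checkpoint can be loaded by passing that branch name as the `revision` argument of `from_pretrained`. A minimal sketch; the repository id and branch name below are illustrative assumptions, not taken from this commit:

```python
# Minimal sketch: load one checkpoint from the collection by branch name.
# Repo id and branch are illustrative assumptions; substitute a real branch
# whose name encodes the training step and the number of tokens seen.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TinyLlama/TinyLlama-1.1B-intermediate-checkpoints"  # assumed repo id
branch = "step-240k-503b"                                   # assumed branch name

tokenizer = AutoTokenizer.from_pretrained(repo, revision=branch)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    revision=branch,            # git branch/tag/commit to pull weights from
    torch_dtype=torch.float16,  # same dtype as the removed pipeline example
)
```

Passing `revision` pins the download to one git ref, so the same code can sweep the whole collection by iterating over branch names.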
.gitattributes
CHANGED
@@ -33,4 +33,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
-TinyLlama_logo.png filter=lfs diff=lfs merge=lfs -text
output.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:812894c86a7f93c9f60d229dbd346a8111c0424fdd143ebba299d2c3d73c2de9
+size 669711964
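
The updated pointer records the blob's SHA-256 and its size in bytes, so a downloaded output.safetensors can be verified against it. A minimal sketch, assuming the file sits in the current working directory:

```python
# Minimal sketch: check a downloaded output.safetensors against the
# oid and size recorded in the new LFS pointer above.
import hashlib
import os

EXPECTED_OID = "812894c86a7f93c9f60d229dbd346a8111c0424fdd143ebba299d2c3d73c2de9"
EXPECTED_SIZE = 669711964    # bytes, from the pointer's `size` line
path = "output.safetensors"  # assumed local path

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)
assert sha.hexdigest() == EXPECTED_OID, "hash mismatch"
print("output.safetensors matches the LFS pointer")
```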