RonanMcGovern committed
Commit 4ad69ba
Parent(s): 35531cd

fix base repo reference

Files changed (1)
  1. README.md +72 -0
README.md ADDED
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
- Trelis/openassistant-llama-style
language:
- en
tags:
- chat
- tinyllama
---
# TinyLlama-1.1B Chat (1 Trillion token checkpoint)

The prompt format is:
```
f"[INST] {prompt} [/INST]"
```
the same prompt format as the Llama 2 chat models.

Note that this model has trouble being succinct and does not reliably emit the end-of-sequence (`</s>`) token.
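
Below is a minimal sketch of how the prompt format might be applied in practice. The repo id is a placeholder (this card does not state the exact path), and the generation settings simply illustrate capping output length since the `</s>` token is unreliable:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Placeholder repo id: replace with this model's actual Hugging Face path.
model_id = "Trelis/TinyLlama-1.1B-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "List three uses for a 1.1B parameter language model."
# Wrap the user prompt in the [INST] ... [/INST] format described above.
inputs = tokenizer(f"[INST] {prompt} [/INST]", return_tensors="pt").to(model.device)

# The model does not reliably emit </s>, so cap generation length while still
# passing eos_token_id so it can stop early when the token does appear.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    top_k=10,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```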

The model was fine-tuned on an adapted, filtered OpenAssistant dataset, available [here](https://huggingface.co/datasets/Trelis/openassistant-llama-style).

The README for the base repo follows below.

# TinyLlama-1.1B

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.

<div align="center">
  <img src="./TinyLlama_logo.png" width="300"/>
</div>

We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be dropped into many open-source projects built on Llama. TinyLlama is also compact, with only 1.1B parameters, which suits applications with tight compute and memory budgets.
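
Because the architecture and tokenizer match Llama exactly, the checkpoint can, for instance, be loaded with the Llama-specific classes rather than the Auto classes. A small sketch (assumes `sentencepiece` is installed; uses the base checkpoint id from the usage example below):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Base checkpoint referenced in the usage example below; any Llama-compatible
# loader or downstream tool should accept it unchanged.
model_id = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b"

tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id)

print(model.config.model_type)                     # "llama"
print(sum(p.numel() for p in model.parameters()))  # roughly 1.1 billion
```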

#### This Model
This is an intermediate checkpoint, trained for 480K steps on 1007B tokens.

#### How to use
You will need `transformers>=4.31`.
Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch

# Base-model checkpoint used in the original TinyLlama card.
model = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline in fp16, letting accelerate place the weights.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    repetition_penalty=1.5,
    eos_token_id=tokenizer.eos_token_id,
    max_length=500,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```