Transformers
English
Inference Endpoints
jncraton committed
Commit e71d673
1 Parent(s): 76e1c2b

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - cerebras/SlimPajama-627B
+ - bigcode/starcoderdata
+ - timdettmers/openassistant-guanaco
+ language:
+ - en
+ ---
10
+ <div align="center">
11
+
12
+ # TinyLlama-1.1B
13
+ </div>
14
+
15
+ https://github.com/jzhang38/TinyLlama
16
+
17
+ The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
18
+
19
+ <div align="center">
20
+ <img src="./TinyLlama_logo.png" width="300"/>
21
+ </div>
22
+
23
+ We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
24
+
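A minimal loading sketch (not part of the original card) of what that compatibility looks like in practice: since the architecture and tokenizer match Llama 2, the standard auto classes in transformers should resolve this checkpoint to the usual Llama implementation. The repo id is the one referenced later in this card; the expected class name is an assumption that follows from the shared architecture.

```python
# Sketch: load TinyLlama with the standard auto classes; because the architecture
# and tokenizer match Llama 2, no custom modeling code should be required.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PY007/TinyLlama-1.1B-Chat-v0.1"  # checkpoint referenced below in this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
print(type(model).__name__)  # expected: LlamaForCausalLM, per the shared architecture
```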
+ #### This Model
+ This is the chat model fine-tuned from [PY007/TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b). The dataset used is [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
+
+ #### How to use
+ You will need transformers>=4.31.
+ Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
+ ```python
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ model = "PY007/TinyLlama-1.1B-Chat-v0.1"
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,  # reuse the tokenizer loaded above instead of leaving it unused
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ prompt = "What are the values in open source projects?"
+ # Chat prompt format used for the openassistant-guanaco fine-tune
+ formatted_prompt = (
+     f"### Human: {prompt}### Assistant:"
+ )
+
+ sequences = pipeline(
+     formatted_prompt,
+     do_sample=True,
+     top_k=50,
+     top_p=0.7,
+     num_return_sequences=1,
+     repetition_penalty=1.1,
+     max_new_tokens=500,
+ )
+ for seq in sequences:
+     print(f"Result: {seq['generated_text']}")
+ ```
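The pipeline above returns the prompt together with its continuation (text-generation keeps the full text by default), so a small post-processing step can isolate the assistant's reply. This is a sketch added here, not part of the original README; it only assumes the "### Human:" turn marker shown in the prompt format above.

```python
# Post-processing sketch (assumed): strip the prompt and cut the continuation
# at the next "### Human:" turn marker used by this chat format.
reply = sequences[0]["generated_text"][len(formatted_prompt):]
reply = reply.split("### Human:")[0].strip()
print(reply)
```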
config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "layer_norm_epsilon": 1e-05,
+   "unk_token": "<unk>"
+ }
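This config.json lists only special tokens and a layer-norm epsilon rather than a full transformers model configuration. Together with model.bin and vocabulary.json below, the layout resembles a CTranslate2-converted model; the commit does not state this, so the sketch below is an assumption, and the local path and generation settings are hypothetical.

```python
# Assumed loading sketch: treats this upload as a CTranslate2-converted model.
# Nothing in the commit confirms this; the file layout merely suggests it.
import ctranslate2
from transformers import AutoTokenizer

model_dir = "path/to/this/repo"  # hypothetical local copy of the uploaded folder
generator = ctranslate2.Generator(model_dir, device="cpu")
tokenizer = AutoTokenizer.from_pretrained(model_dir)  # Llama tokenizer files are uploaded alongside

prompt = "### Human: What are the values in open source projects?### Assistant:"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch(
    [tokens], max_length=256, sampling_topk=50, sampling_temperature=0.7
)
print(tokenizer.decode(results[0].sequences_ids[0]))
```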
model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:71b5ffae759f0a4f0d85b5ff20bdc36fbd14167fa439da58af4e597f9d05f8bc
+ size 1102182891
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "[PAD]",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
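A quick inspection sketch, assuming a local copy of this folder, to confirm which special tokens the loaded tokenizer actually exposes; the path is hypothetical and the expected output follows from this file and tokenizer_config.json below.

```python
# Inspection sketch (assumed, not part of the commit): check the special tokens
# transformers picks up from special_tokens_map.json and tokenizer_config.json.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/this/repo")  # hypothetical local path
print(tok.special_tokens_map)                  # expected to include <s>, </s>, <unk>
print(tok.bos_token_id, tok.eos_token_id, tok.unk_token_id)
```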
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "clean_up_tokenization_spaces": false,
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "legacy": false,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": null,
+   "padding_side": "right",
+   "sp_model_kwargs": {},
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "use_default_system_prompt": true
+ }
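Note that "pad_token" is null here, while special_tokens_map.json declares "[PAD]", so depending on how the tokenizer loads it may end up without a usable padding token. A common workaround, sketched below as an assumption with a hypothetical path rather than anything stated in the commit, is to fall back to the EOS token for padding in batched generation.

```python
# Assumed workaround sketch, not part of the commit: give the tokenizer a padding
# token only if it loaded without one, so batched inputs can be padded.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/this/repo")  # hypothetical local path
if tok.pad_token is None:
    tok.pad_token = tok.eos_token  # reuse </s> as padding, a common Llama convention

batch = tok(
    ["### Human: Hi### Assistant:", "### Human: Hello there### Assistant:"],
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape)
```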
vocabulary.json ADDED
The diff for this file is too large to render. See raw diff