blackmount8 committed on
Commit 1bf3de1
1 Parent(s): 82c3c99

Upload model and tokenizer.

README.md CHANGED
@@ -1,3 +1,58 @@
---
license: cc
+ datasets:
+ - VMware/open-instruct-v1-oasst-dolly-hhrlhf
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: text-generation
---
+ # blackmount8/open-llama-7B-open-instruct-ct2-float16
+
+ Float16 version of [VMware/open-llama-7b-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct), converted to the CTranslate2 format with float16 weights.
+
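The exact conversion command is not recorded in this card; the snippet below is one plausible way to produce a repository like this with CTranslate2's Transformers converter. The output directory name and the list of copied tokenizer files are assumptions.

```python
# Plausible re-creation of this repository: convert the upstream checkpoint to
# the CTranslate2 format with float16 weights (names below are assumptions).
from ctranslate2.converters import TransformersConverter

converter = TransformersConverter(
    "VMware/open-llama-7b-open-instruct",
    copy_files=["tokenizer.model", "special_tokens_map.json", "tokenizer_config.json"],
)
converter.convert("open-llama-7b-open-instruct-ct2-float16", quantization="float16")
```
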
+ ## VMware/open-llama-7B-open-instruct
+
+ Instruction-tuned version of the fully trained Open LLama 7B model. The model is open for **COMMERCIAL USE**.
+
+ **NOTE**: The model was trained using the Alpaca prompt template.
+ **NOTE**: The fast tokenizer produces incorrect encodings; set `use_fast = False` when instantiating the tokenizer.
+
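The template itself is not reproduced in this card; the helper below is a hedged sketch of the standard Alpaca format referenced in the note above, not something shipped with this repository.

```python
# Assumed Alpaca-style prompt template; wrap an instruction before tokenizing.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def build_prompt(instruction: str) -> str:
    """Format a user instruction with the assumed Alpaca template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)
```
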
+ ## License
+
+ - **Commercially viable**
+ - The instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf), is under cc-by-sa-3.0
+ - The language model, [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b), is under apache-2.0
+
+ ## Nomenclature
+
+ - Model: Open-llama
+ - Model size: 7B parameters
+ - Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf)
+
+ ## Use in CTranslate2
+
+ ```python
+ import ctranslate2
+ from transformers import AutoTokenizer
+
+ model_name = "./open-llama-7b-open-instruct-ct2-float16"
+
+ # The fast tokenizer mis-encodes this model, so load the slow SentencePiece tokenizer.
+ tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, padding_side="left", truncation_side="left")
+ model = ctranslate2.Generator(model_name, device="cuda", compute_type="float16")
+
+ input_text = ["What is the meaning of stonehenge?", "Hello mate!"]
+
+ # CTranslate2 expects token strings rather than ids, so convert after encoding.
+ input_ids = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).input_ids
+ input_tokens = [tokenizer.convert_ids_to_tokens(ele) for ele in input_ids]
+
+ outputs = model.generate_batch(input_tokens, max_length=128)
+
+ # Each result holds one or more hypotheses; keep the token ids of the first one.
+ output_tokens = [ele.sequences_ids[0] for ele in outputs]
+
+ output = tokenizer.batch_decode(output_tokens)
+
+ print(output)
+ ```
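By default `generate_batch` decodes greedily; CTranslate2 also exposes sampling parameters. The variant below, reusing `model` and `input_tokens` from the example above, is an illustrative sketch and the specific values are assumptions, not recommendations from this card.

```python
# Optional: sample instead of greedy decoding (values are illustrative).
outputs = model.generate_batch(
    input_tokens,
    max_length=128,
    sampling_topk=40,
    sampling_temperature=0.7,
)
```
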
config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "layer_norm_epsilon": null,
+   "unk_token": "<unk>"
+ }
model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4718f6fa41ace103e406b27aa197500ea037a79329845bbdb5da747e14b2b41f
+ size 13476848176
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<unk>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab1b681ec7fc02fed5edd3026687d7a692a918c4dd8e150ca2e3994a6229843b
+ size 534194
tokenizer_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "clean_up_tokenization_spaces": false,
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "model_max_length": 2048,
+   "pad_token": null,
+   "padding_side": "right",
+   "sp_model_kwargs": {},
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
vocabulary.json ADDED
The diff for this file is too large to render. See raw diff