LoneStriker committed on
Commit 495adea (1 parent: 4548211)

2.4-bit Exllama v2 quant of airoboros-gpt4-1.4.1 70B model

README.md CHANGED
@@ -1,3 +1,50 @@
  ---
  license: other
+ datasets:
+ - jondurbin/airoboros-gpt4-1.4.1
  ---
+ ### 2.4-bit Exllama v2 quant of airoboros-gpt4-1.4.1
+
+ Simple quantization of the original model. This model should fit on a single 24 GB VRAM GPU where Exllama v2 is supported,
+ and should also support the full 4096-token context on a single GPU, provided no desktop apps are running on the same GPU.
+ Ideally, the GPU would be completely free of any desktop or application load (see the loading sketch after this diff).
+
+ ### Overview
+
+ Llama 2 70B fine-tune using https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1
+
+ See the previous Llama 65B model card for info:
+ https://hf.co/jondurbin/airoboros-65b-gpt4-1.4
+
+ ### Contribute
+
+ If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
+ take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
+
+ To help me with the OpenAI/compute costs:
+
+ - https://bmc.link/jondurbin
+ - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
+ - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
+
+ ### License and usage restrictions
+
+ The base model has a custom Meta license:
+ - See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
+ - See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
+
+ The fine-tuning data was generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros).
+
+ The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI:
+
+ - What does *compete* actually mean here?
+ - These small open-source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place.
+ - If someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS, because they didn't call the API, so I don't know how that works.
+ - The training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissively licensed material in the first place.
+ - Other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct, released the data and model as apache-2.
+
+ I am purposely leaving this license ambiguous (other than the fact that you must comply with the original Meta license for llama-2), because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
+
+ Your best bet is probably to avoid using this commercially, due to the OpenAI API usage.
+
+ Either way, by using this model, you agree to completely indemnify me.
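
For reference, here is a minimal loading sketch using the `exllamav2` Python package, based on its bundled example scripts. The local directory name, prompt, and sampler settings below are illustrative assumptions, not something specified by this repo.

```python
# Minimal Exllama v2 loading sketch; assumes the exllamav2 package is
# installed and this repo has been downloaded to model_dir.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "airoboros-l2-70b-gpt4-1.4.1-2.4bpw-h6-elx2"  # hypothetical local path

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
model.load()  # loads the quantized weights onto the GPU

tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)  # KV cache for the full context window
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7  # illustrative sampling choice

# Illustrative prompt; see the airoboros card for the exact prompt format.
print(generator.generate_simple("USER: Say hello.\nASSISTANT:", settings, 64))
```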
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "airoboros-l2-70b-gpt4-1.4.1-2.4bpw-h6-elx2",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 8192,
+   "initializer_range": 0.02,
+   "intermediate_size": 28672,
+   "max_position_embeddings": 4096,
+   "model_type": "llama",
+   "num_attention_heads": 64,
+   "num_hidden_layers": 80,
+   "num_key_value_heads": 8,
+   "pad_token_id": 0,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.32.0.dev0",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
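
The config confirms the full 4096-token window (`max_position_embeddings`) and grouped-query attention (8 KV heads out of 64 attention heads). A rough back-of-the-envelope check of the fp16 KV-cache footprint at full context, using only values from this config (the formula is the standard KV-cache estimate, not something stated in the repo):

```python
# Approximate fp16 KV-cache size at the full 4096-token context.
layers = 80              # num_hidden_layers
kv_heads = 8             # num_key_value_heads (GQA)
head_dim = 8192 // 64    # hidden_size / num_attention_heads = 128
context = 4096           # max_position_embeddings
bytes_per_value = 2      # float16

kv_bytes = 2 * layers * kv_heads * head_dim * context * bytes_per_value  # K and V
print(f"{kv_bytes / 1024**3:.2f} GiB")  # 1.25 GiB
```

Together with roughly 21 GB of quantized weights (see the shard sizes below), this is why the card suggests keeping the 24 GB GPU free of desktop apps.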
output-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8dcfcff4a99284dd8ea4ed1349a18fdffac0b31489b8a40f9e9bc71c1c96af4
+ size 8563868088
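
The three `output-*.safetensors` entries are Git LFS pointer files: the `oid` is the SHA-256 of the actual payload, so a download can be verified locally. A minimal sketch (the file name and hash are taken from this commit):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

expected = "a8dcfcff4a99284dd8ea4ed1349a18fdffac0b31489b8a40f9e9bc71c1c96af4"
assert sha256_of("output-00001-of-00003.safetensors") == expected
```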
output-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1131f2e3e4cb6f645f1a148271f1e836cb9ed841d6956085bbf4e95def0106b
+ size 8572463208
output-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dba754cdbd6e1d4f5f36b05233508b53c5af679a9fcd513278b27442f6b04c69
+ size 4156160000
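
As a sanity check, the shard sizes are consistent with the advertised 2.4 bits per weight (taking the nominal 70B parameter count, which is an approximation):

```python
# Sum of the three safetensors shards, from the LFS pointers above.
total_bytes = 8563868088 + 8572463208 + 4156160000  # ~21.3 GB
params = 70e9                                       # nominal 70B, approximate
print(total_bytes * 8 / params)                     # ~2.43 bits per weight
```

The slight excess over 2.4 is expected if, as the `h6` suffix suggests, the output head is kept at 6 bits.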
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "clean_up_tokenization_spaces": false,
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "legacy": false,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": null,
+   "sp_model_kwargs": {},
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
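
The tokenizer files are the stock Llama 2 setup; `model_max_length` is the transformers "unset" sentinel (`int(1e30)`), so the effective context limit comes from `max_position_embeddings` in config.json. A quick check with transformers, assuming the repo has been downloaded locally (the directory name is hypothetical):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("airoboros-l2-70b-gpt4-1.4.1-2.4bpw-h6-elx2")

print(tok.bos_token, tok.eos_token, tok.unk_token)  # <s> </s> <unk>
ids = tok("Hello")["input_ids"]
print(ids[0] == tok.bos_token_id)  # True: BOS is prepended by default
```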