camenduru committed on commit 2ed8447 (1 parent: 987170a)

thanks to TheBloke ❤
README.md ADDED
---
license: apache-2.0
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
inference: false
---

# WizardLM - uncensored: An Instruction-following LLM Using Evol-Instruct

These files are GPTQ 4bit model files for [Eric Hartford's 'uncensored' version of WizardLM](https://huggingface.co/ehartford/WizardLM-7B-Uncensored).

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

Eric did a fresh 7B training using the WizardLM method, on [a dataset edited to remove all the "I'm sorry.." type ChatGPT responses](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered).

## Other repositories available

* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ)
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GGML)
* [Eric's unquantised model in HF format](https://huggingface.co/ehartford/WizardLM-7B-Uncensored)

## How to easily download and use this model in text-generation-webui

Open text-generation-webui as normal, then:

1. Click the **Model** tab.
2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-7B-uncensored-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model** drop-down, choose the model you just downloaded: `WizardLM-7B-uncensored-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`.
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation** tab and enter a prompt!
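
If you'd rather script this instead of using the webui, one possible route is the AutoGPTQ library. AutoGPTQ is not covered by this README, so treat the following as a hedged sketch rather than the supported path: the `model_basename` matches the file described under "Provided files" below, and the quantisation parameters mirror step 8 above (`Bits = 4`, `Groupsize = 128`, `model_type = Llama`).

```python
# Minimal sketch: loading this GPTQ model from Python with AutoGPTQ.
# AutoGPTQ is an assumption here (this README only documents the webui path).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/WizardLM-7B-uncensored-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo)

# model_basename is the provided safetensors file, minus its extension.
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order",
    use_safetensors=True,
    device="cuda:0",
)

prompt = "Tell me about llamas."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```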

## Provided files

**Compatible file - WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors**

In the `main` branch - the default one - you will find `WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors`.

This will work with all versions of GPTQ-for-LLaMa, so it has maximum compatibility.

It was created without the `--act-order` parameter. It may have slightly lower inference quality compared to an act-order file, but it is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.

* `WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128. No act-order.
  * Command used to create the GPTQ:
    ```
    python llama.py models/ehartford_WizardLM-7B-Uncensored c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/eric-gptq/WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors
    ```
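
If you prefer to fetch the single weights file above outside the webui, the `huggingface_hub` client can download it directly. A minimal sketch (the local cache location is managed by the library):

```python
# Sketch: downloading just the quantised weights file with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/WizardLM-7B-uncensored-GPTQ",
    filename="WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors",
)
print(path)  # local path to the ~3.9 GB safetensors file
```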

# Eric's original model card

This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out, including Rohan, TheBloke, and Caseus.

# WizardLM's original model card

## Overview of Evol-Instruct

Evol-Instruct is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skill ranges, in order to improve the performance of LLMs.

![info](https://github.com/nlpxucan/WizardLM/raw/main/imgs/git_overall.png)
![info](https://github.com/nlpxucan/WizardLM/raw/main/imgs/git_running.png)
WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:232c254d1ab4c8992e509467c88face36a7f5c3ce774736c7f8fcda07581a008
size 3893998440
```
added_tokens.json ADDED

```json
{
  "[PAD]": 32000
}
```
config.json ADDED

```json
{
  "_name_or_path": "/workspace/llama-7b-hf",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 2048,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "pad_token_id": 0,
  "rms_norm_eps": 1e-06,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.29.0.dev0",
  "use_cache": true,
  "vocab_size": 32001
}
```
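
As a quick cross-check of how the files in this commit fit together: `config.json` declares `vocab_size: 32001` because `added_tokens.json` appends the `[PAD]` token at id 32000 on top of the base 32,000-token LLaMA vocabulary. A minimal sketch to verify, assuming the repo loads with standard `transformers` auto classes:

```python
# Sketch: cross-checking config.json against added_tokens.json.
from transformers import AutoConfig, AutoTokenizer

repo = "TheBloke/WizardLM-7B-uncensored-GPTQ"
config = AutoConfig.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

print(config.vocab_size)                         # 32001
print(tokenizer.convert_tokens_to_ids("[PAD]"))  # 32000, from added_tokens.json
```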
generation_config.json ADDED

```json
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "transformers_version": "4.29.0.dev0"
}
```
special_tokens_map.json ADDED

```json
{
  "bos_token": "</s>",
  "eos_token": "</s>",
  "pad_token": "[PAD]",
  "unk_token": "</s>"
}
```
tokenizer.json ADDED
The contents of this file are too large to render here.
tokenizer.model ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723
```
tokenizer_config.json ADDED

```json
{
  "add_bos_token": true,
  "add_eos_token": false,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "clean_up_tokenization_spaces": false,
  "eos_token": {
    "__type": "AddedToken",
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "model_max_length": 2048,
  "pad_token": null,
  "padding_side": "right",
  "sp_model_kwargs": {},
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
```
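
One practical note on the settings above: `add_bos_token` is true while `add_eos_token` is false, so encoded prompts begin with `<s>` but do not end with `</s>`, and generation stops only when the model itself emits EOS. A minimal sketch to observe this:

```python
# Sketch: per tokenizer_config.json, BOS is prepended on encode and
# EOS is not appended (add_bos_token: true, add_eos_token: false).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("TheBloke/WizardLM-7B-uncensored-GPTQ")
ids = tok("Hello, world").input_ids

print(ids[0])   # expected: 1, the <s> BOS id from config.json
print(ids[-1])  # expected: not 2 - no </s> EOS is appended
```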