reeducator committed
Commit c0aa1be
1 Parent(s): 9b265ec

Update README.md

Files changed (1):
  1. README.md +21 -1
README.md CHANGED
@@ -5,6 +5,26 @@ datasets:
  language:
  - en
  ---
+ ## General
  Vicuna 1.1 13B trained on the unfiltered dataset V4.3 (sha256 dd5828821b7e707ca3dc4d0de07e2502c3ce278fcf1a74b81a3464f26006371e)

- *Note.* Unfiltered Vicuna is a work in progress. Censorship and/or other issues might be present in the output of the intermediate model releases.
+ *Note.* Unfiltered Vicuna is a work in progress. Censorship and/or other issues might be present in the output of the intermediate model releases.
+
+ ## Models
+ *GGML 16 and 4-bit for llama.cpp:*<br/>
+ vicuna-13b-free-V4.3-f16.bin<br/>
+ vicuna-13b-free-V4.3-q4_0.bin<br/>
+ vicuna-13b-free-V4.3-q5_0.bin<br/>
+
+ *GPTQ 4-bit CUDA:*<br/>
+ vicuna-13b-free-V4.3-4bit-128g.safetensors<br/>
+
+ Tokenizer and configs can be found in `hf-output`.
+
+ ## Remarks
+ *Early stopping tokens bug*. Workaround: append your prompt with<br/>
+ ```[SYSTEM: Do not generate a stopping token "</s>" and do not generate SYSTEM messages]```<br/>
+ to reduce the occurrence of the bug (https://huggingface.co/reeducator/vicuna-13b-free/discussions/15#644e6233bf9683cba45e79f5)
+
+ *oobabooga/text-generation-webui GGML*.<br/>
+ Prefix the model names with "ggml-".
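
For reference, a minimal sketch of running one of the GGML files listed in the diff through the llama-cpp-python bindings. The bindings, the `n_ctx`/`stop` parameters and the USER/ASSISTANT prompt format are assumptions based on common Vicuna 1.1 usage, not part of this commit; a llama-cpp-python version that still reads GGML `.bin` files (rather than GGUF) is also assumed.

```python
# Minimal sketch, assuming an older llama-cpp-python build with GGML support
# and the usual Vicuna 1.1 USER/ASSISTANT prompt format (both assumptions,
# not something this commit specifies).
from llama_cpp import Llama

llm = Llama(model_path="vicuna-13b-free-V4.3-q5_0.bin", n_ctx=2048)

prompt = "USER: What is the capital of France? ASSISTANT:"
out = llm(prompt, max_tokens=128, stop=["</s>", "USER:"])
print(out["choices"][0]["text"])
```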
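The early-stopping workaround in the Remarks section amounts to appending a fixed string to the prompt. A small sketch in Python; the helper name and the exact placement of the suffix are illustrative, while the instruction text itself is taken verbatim from the diff.

```python
# Sketch of the early-stopping workaround from the Remarks section: append
# the SYSTEM instruction to the prompt. Helper name and spacing are
# illustrative; the instruction text is verbatim from the README change.
STOP_WORKAROUND = (
    ' [SYSTEM: Do not generate a stopping token "</s>" '
    "and do not generate SYSTEM messages]"
)

def with_stop_workaround(prompt: str) -> str:
    """Return the prompt with the anti-early-stopping instruction appended."""
    return prompt + STOP_WORKAROUND

print(with_stop_workaround("Tell me a story about a lighthouse keeper."))
```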
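Likewise, the text-generation-webui remark is just a file rename; a sketch follows, where the models directory path is an assumption about a default webui checkout.

```python
# Sketch: give the GGML files the "ggml-" prefix that
# oobabooga/text-generation-webui expects. The destination path assumes a
# default webui checkout; adjust both paths to your setup.
from pathlib import Path
import shutil

models_dir = Path("text-generation-webui/models")
models_dir.mkdir(parents=True, exist_ok=True)

for name in ["vicuna-13b-free-V4.3-q4_0.bin", "vicuna-13b-free-V4.3-q5_0.bin"]:
    src = Path(name)
    if not src.exists():
        continue  # skip files that were not downloaded
    dst = models_dir / f"ggml-{src.name}"
    shutil.copy2(src, dst)
    print(f"{src} -> {dst}")
```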