TheBloke committed
Commit 77514c7
1 Parent(s): a92f813

Updating model files

Files changed (1)
  1. README.md +24 -2
README.md CHANGED
@@ -7,6 +7,17 @@ language:
  tags:
  - uncensored
  ---
+ <div style="width: 100%;">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
+ </div>
+ <div style="display: flex; flex-direction: column; align-items: flex-end;">
+ <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
+ </div>
+ </div>

  # Wizard-Vicuna-7B-Uncensored GPTQ

@@ -56,15 +67,26 @@ It was created without the `--act-order` parameter. It may have slightly lower i
  python llama.py ehartford_Wizard-Vicuna-7B-Uncensored wikitext2 --wbits 4 --groupsize 128 --true-sequential --save_safetensors Wizard-Vicuna-7B-Uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors
  ```

+ ## Want to support my work?
+
+ I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.
+
+ So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on various AI projects.
+
+ Donaters will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.
+
+ * Patreon: coming soon! (just awaiting approval)
+ * Ko-Fi: https://ko-fi.com/TheBlokeAI
+ * Discord: https://discord.gg/UBgz4VXf
  # Original model card

  This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained against LLaMA-7B with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately with for example with a RLHF LoRA.

  Shout out to the open source AI/ML community, and everyone who helped me out.

- Note:
+ Note:

- An uncensored model has no guardrails.
+ An uncensored model has no guardrails.

  You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
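
For context on what the quantisation command in the diff produces: the sketch below shows one way a 4-bit, group-size-128, no-act-order safetensors file like this could be loaded for inference using the AutoGPTQ library. It is an illustrative sketch only, not part of this commit or the README it modifies; the local directory, `model_basename`, and prompt are assumed example values.

```python
# Illustrative sketch (not part of this commit): loading a 4-bit GPTQ
# safetensors file with AutoGPTQ. The directory, basename and prompt are
# assumed examples, not values taken from this repository's README.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "Wizard-Vicuna-7B-Uncensored-GPTQ"  # assumed local download path
basename = "Wizard-Vicuna-7B-Uncensored-GPTQ-4bit-128g.compat.no-act-order"

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)

# model_basename must match the .safetensors filename without its extension
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename=basename,
    use_safetensors=True,
    device="cuda:0",
)

prompt = "USER: Write a short poem about llamas.\nASSISTANT:"  # example prompt only
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(input_ids=input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```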