TheBloke committed on
Commit
b7e8688
1 Parent(s): 3d6ab67

Update README.md

Files changed (1): README.md (+8, -8)
README.md CHANGED
@@ -9,7 +9,7 @@ license: other
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/rSU9f2X3">Chat & support: my new Discord server</a></p>
+<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
 <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
@@ -33,7 +33,7 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ).
 * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGML).
 * [float16 HF format model for GPU inference and further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-fp16).
-*
+*
 ## THE FILES IN MAIN BRANCH REQUIRES LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
 llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
@@ -72,7 +72,7 @@ Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](http
 
 For further support, and discussions on these models and AI in general, join us at:
 
-[TheBloke AI's Discord server](https://discord.gg/UBgz4VXf)
+[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
 
 ## Thanks, and how to contribute.
 
@@ -82,14 +82,14 @@ I've had a lot of people ask if they can contribute. I enjoy providing models an
 
 If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
 
-Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits
+Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
 
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
 **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
 
-Thank you to all my generous patrons and donaters.
+Thank you to all my generous patrons and donaters!
 <!-- footer end -->
 
 # Original model card: Eric Hartford's Wizard Vicuna 30B Uncensored
@@ -98,12 +98,12 @@ This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) tr
 
 Shout out to the open source AI/ML community, and everyone who helped me out.
 
-Note:
+Note:
 
-An uncensored model has no guardrails.
+An uncensored model has no guardrails.
 
 You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
 
 Publishing anything this model generates is the same as publishing it yourself.
 
-You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
+You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
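The README being edited here warns that the files in the main branch require llama.cpp at or after commit 2d5db48, because PR #1508 changed the quantisation format. A quick way to check whether a downloaded model file uses the newer format is to inspect its header. The sketch below assumes the legacy GGJT container layout used by llama.cpp at the time (a 4-byte magic "ggjt" followed by a 4-byte version number, both little-endian); to the best of my understanding, files written after the PR #1508 change report version 3, while older files report a lower version.

```python
import struct

GGJT_MAGIC = 0x67676A74  # ASCII "ggjt", stored little-endian on disk


def ggjt_file_version(path):
    """Return the format version from a llama.cpp GGJT model file header.

    Assumes the legacy GGJT layout: uint32 magic followed by uint32 version.
    Post-PR-#1508 quantised files are expected to carry version 3.
    """
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8:
        raise ValueError("file too short to contain a GGJT header")
    magic, version = struct.unpack("<II", header)
    if magic != GGJT_MAGIC:
        raise ValueError("not a GGJT file: bad magic 0x%08x" % magic)
    return version
```

If this reports a version below 3 for one of the GGML files, the file predates the quantisation change and would need a llama.cpp build older than commit 2d5db48 (or a re-download of the updated file).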