TheBloke committed on
Commit 3d1f699
1 Parent(s): 952bfcd

Updating model files

Files changed (1):
  1. README.md (+21 -3)
README.md CHANGED
@@ -11,6 +11,17 @@ datasets:
 - tatsu-lab/alpaca
 inference: false
 ---
+<div style="width: 100%;">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+</div>
+<div style="display: flex; justify-content: space-between; width: 100%;">
+<div style="display: flex; flex-direction: column; align-items: flex-start;">
+<p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
+</div>
+<div style="display: flex; flex-direction: column; align-items: flex-end;">
+<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
+</div>
+</div>
 # StableVicuna-13B-GGML
 
 This is GGML format quantised 4bit and 5bit models of [CarperAI's StableVicuna 13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta).
@@ -65,10 +76,17 @@ GGML models can be loaded into text-generation-webui by installing the llama.cpp
 
 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
-Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
+## Want to support my work?
 
-**Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) which may help with getting the files working in text-gen-ui sooner.
+I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.
 
+So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on various AI projects.
+
+Donaters will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.
+
+* Patreon: coming soon! (just awaiting approval)
+* Ko-Fi: https://ko-fi.com/TheBlokeAI
+* Discord: https://discord.gg/UBgz4VXf
 # Original StableVicuna-13B model card
 
 ## Model Description
@@ -216,7 +234,7 @@ This work would not have been possible without the support of [Stability AI](htt
       Zack Witten and
       alexandremuzio and
       crumb},
-  title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark
+  title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark
       Util, T5 ILQL, Tests}},
   month = mar,
   year = 2023,