TheBloke committed on
Commit 963d84a
1 Parent(s): a2e2022

Updating model files

Files changed (1):
  1. README.md +22 -2
README.md CHANGED
@@ -6,6 +6,17 @@ tags:
 - llama
 inference: false
 ---
+<div style="width: 100%;">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+</div>
+<div style="display: flex; justify-content: space-between; width: 100%;">
+<div style="display: flex; flex-direction: column; align-items: flex-start;">
+<p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
+</div>
+<div style="display: flex; flex-direction: column; align-items: flex-end;">
+<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
+</div>
+</div>
 # Wizard-Vicuna-13B-GGML
 
 This is GGML format quantised 4bit and 5bit models of [junelee's wizard-vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b).
@@ -51,8 +62,17 @@ GGML models can be loaded into text-generation-webui by installing the llama.cpp
 
 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
-Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
+## Want to support my work?
 
+I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.
+
+So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on various AI projects.
+
+Donaters will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.
+
+* Patreon: coming soon! (just awaiting approval)
+* Ko-Fi: https://ko-fi.com/TheBlokeAI
+* Discord: https://discord.gg/UBgz4VXf
 # Original WizardVicuna-13B model card
 
 Github page: https://github.com/melodysdreamj/WizardVicunaLM
@@ -67,7 +87,7 @@ I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like th
 ![](https://user-images.githubusercontent.com/21379657/236088663-3fa212c9-0112-4d44-9b01-f16ea093cb67.png)
 
 
-### Detail
+### Detail
 
 The questions presented here are not from rigorous tests, but rather, I asked a few questions and requested GPT-4 to score them. The models compared were ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order.
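For readers unfamiliar with the q4_0 / q4_1 / q8_0 names mentioned in the removed note: these are llama.cpp's blockwise quantisation formats, where each block of weights is stored as a handful of low-bit integers plus a per-block scale. The sketch below is a rough Python illustration of symmetric 4-bit blockwise quantisation, not the exact ggml q4_0 bit layout or API; all function names and the block size are illustrative.

```python
import numpy as np

def quantize_q4_sketch(x, block_size=32):
    """Rough sketch of symmetric 4-bit blockwise quantisation.

    Not the actual ggml q4_0 format -- just the core idea: each block
    of `block_size` floats becomes one float scale plus 4-bit integers
    in [-8, 7].
    """
    x = np.asarray(x, dtype=np.float32)
    blocks = x.reshape(-1, block_size)
    # One scale per block, chosen so the largest magnitude maps near 7
    amax = np.max(np.abs(blocks), axis=1, keepdims=True)
    scale = np.where(amax == 0, 1.0, amax / 7.0)
    q = np.clip(np.round(blocks / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float values from quantised blocks."""
    return (q.astype(np.float32) * scale).reshape(-1)

x = np.linspace(-1.0, 1.0, 64).astype(np.float32)
q, scale = quantize_q4_sketch(x)
x_hat = dequantize(q, scale)
err = np.max(np.abs(x - x_hat))  # bounded by half a quantisation step
```

q4_1 adds a per-block offset (asymmetric quantisation) and q8_0 uses 8-bit integers; the structure is otherwise the same scale-per-block scheme.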