Updating model files
README.md
language:
- en
inference: false
---

<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
</div>
</div>

# GPT4All-13B-snoozy-GGML

These files are GGML format model files of [Nomic.AI's GPT4all-13B-snoozy](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy).

llama.cpp recently made another breaking change to its quantisation methods.

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
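
Assuming a from-source CPU build, checking whether a local llama.cpp checkout is new enough might look like this (the `merge-base` check is a suggested sketch, not something the repo documents):

```shell
# Clone and build llama.cpp (plain CPU make build).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Verify the checkout contains commit 2d5db48 (May 19th) or later.
git merge-base --is-ancestor 2d5db48 HEAD && echo "new quantisation format supported"

make
```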
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument…

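As a sketch, a basic invocation with llama.cpp's `main` binary might look like the following. The model filename here is an assumption (use whichever quantised file you downloaded), and `-i -ins` enables llama.cpp's interactive/instruct chat mode in builds of this era:

```shell
# One-shot generation: -t sets CPU threads, -n the number of tokens to predict.
# Model filename is illustrative only.
./main -t 8 -m GPT4All-13B-snoozy.ggmlv3.q4_0.bin --color \
  -n 256 -p "Write a story about llamas"

# Chat-style conversation: drop -p and run in interactive/instruct mode instead.
./main -t 8 -m GPT4All-13B-snoozy.ggmlv3.q4_0.bin --color -i -ins
```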
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

## Want to support my work?

I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.

So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and to work on various AI projects.

Donators will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.

* Patreon: coming soon! (just awaiting approval)
* Ko-Fi: https://ko-fi.com/TheBlokeAI
* Discord: https://discord.gg/UBgz4VXf

# Original Model Card for GPT4All-13b-snoozy

An Apache-2 licensed chatbot trained over a massive curated corpus of assistant interactions including word problems, multi-turn dialogue, code, poems, songs, and stories.