Commit a88e72e by TheBloke
1 Parent(s): 237c4ab

Update README.md

Files changed (1): README.md (+135 -1)
---
license: gpl
datasets:
- nomic-ai/gpt4all-j-prompt-generations
language:
- en
inference: false
---
# GPT4All-13B-snoozy-GGML

These files are GGML format model files for [Nomic.AI's GPT4all-13B-snoozy](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy).

GGML files are for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
## Repositories available

* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/GPT4ALL-13B-snoozy-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/GPT4ALL-13B-snoozy-GGML).
* [Nomic.AI's original model in float32 HF format for GPU inference](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy).

## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `GPT4All-13B-snoozy.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10GB | Maximum compatibility |
| `GPT4All-13B-snoozy.q4_2.bin` | q4_2 | 4bit | 8.14GB | 10GB | Best compromise between resources, speed and quality |
| `GPT4All-13B-snoozy.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11GB | Brand-new 5bit method. Potentially higher quality than 4bit, at a cost of slightly higher resource usage. |
| `GPT4All-13B-snoozy.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12GB | Brand-new 5bit method. Slightly higher resource usage than q5_0. |

* The q4_0 file provides lower quality but maximal compatibility. It will work with past and future versions of llama.cpp.
* The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
* The q5_0 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_0.
* The q5_1 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_1.

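If you only want a single quantised file rather than cloning the whole repo, a direct download is simplest. A minimal sketch using the q4_2 file as an example (the `resolve/main` URL pattern is standard for Hugging Face, but the exact filename is just the example from the table above; swap in whichever file suits your hardware):

```
# Download one quantised file directly from this repo
# (q4_2 shown as an example; pick the quantisation you want from the table)
wget https://huggingface.co/TheBloke/GPT4ALL-13B-snoozy-GGML/resolve/main/GPT4All-13B-snoozy.q4_2.bin
```
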
## q4_2 compatibility

q4_2 is a relatively new 4bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.

To use these files you will need recent llama.cpp code. It is also possible that future updates to llama.cpp will require these files to be regenerated.

If and when the q4_2 file no longer works with recent versions of llama.cpp, I will endeavour to update it.

If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.

## q5_0 and q5_1 compatibility

These new methods were merged into llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild to be able to use them.

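A minimal rebuild sketch on Linux or macOS (a plain `make` build, as described in the llama.cpp README; adjust for your platform):

```
# Clone llama.cpp (or run `git pull` inside an existing checkout), then rebuild
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make clean && make
```
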
Don't expect any third-party UIs or tools to support them yet.

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 12 -m GPT4All-13B-snoozy.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```

Change `-t 12` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`, as in the sketch below.

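A minimal interactive invocation might look like this (same model file and settings as the command above; adjust to taste):

```
# Interactive chat mode: -i -ins replaces the one-shot -p "<PROMPT>" argument
./main -t 12 -m GPT4All-13B-snoozy.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```
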
## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui does not support the new q5 quantisation methods.

**Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that these files can be used in the UI.

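As a rough sketch (the directory layout is an assumption; see the linked docs for your setup), text-generation-webui just needs the GGML file placed where it looks for models, typically its `models` directory:

```
# Assumed layout: text-generation-webui discovers model files in its models/ directory
cp GPT4All-13B-snoozy.q4_2.bin text-generation-webui/models/
```
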
# Original Model Card for GPT4All-13b-snoozy

An Apache-2 licensed chatbot trained over a massive curated corpus of assistant interactions including word problems, multi-turn dialogue, code, poems, songs, and stories.

## Model Details

### Model Description

This model has been finetuned from LLaMA 13B.

- **Developed by:** [Nomic AI](https://home.nomic.ai)
- **Model Type:** A LLaMA 13B model finetuned on assistant-style interaction data
- **Language(s) (NLP):** English
- **License:** Apache-2
- **Finetuned from model:** LLaMA 13B

This model was trained on `nomic-ai/gpt4all-j-prompt-generations` using `revision=v1.3-groovy`.

### Model Sources

- **Repository:** [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)
- **Base Model Repository:** [https://github.com/facebookresearch/llama](https://github.com/facebookresearch/llama)
- **Demo:** [https://gpt4all.io/](https://gpt4all.io/)

### Results

Results on common sense reasoning benchmarks (asterisks mark the best score in each column):

```
Model                    BoolQ    PIQA    HellaSwag  WinoGrande  ARC-e   ARC-c   OBQA
-----------------------  -------  ------  ---------  ----------  ------  ------  ------
GPT4All-J 6B v1.0        73.4     74.8    63.4       64.7        54.9    36.0    40.2
GPT4All-J v1.1-breezy    74.0     75.1    63.2       63.6        55.4    34.9    38.4
GPT4All-J v1.2-jazzy     74.8     74.9    63.6       63.8        56.6    35.3    41.0
GPT4All-J v1.3-groovy    73.6     74.3    63.8       63.5        57.7    35.0    38.8
GPT4All-J Lora 6B        68.6     75.8    66.2       63.5        56.4    35.7    40.2
GPT4All LLaMa Lora 7B    73.1     77.6    72.1       67.8        51.1    40.4    40.2
GPT4All 13B snoozy       *83.3*   79.2    75.0       *71.3*      60.9    44.2    43.4
Dolly 6B                 68.8     77.3    67.6       63.9        62.9    38.7    41.2
Dolly 12B                56.7     75.4    71.0       62.2        *64.6*  38.5    40.4
Alpaca 7B                73.9     77.2    73.9       66.1        59.8    43.3    43.4
Alpaca Lora 7B           74.3     *79.3*  74.0       68.8        56.6    43.9    42.6
GPT-J 6B                 65.4     76.2    66.2       64.1        62.2    36.6    38.2
LLama 7B                 73.1     77.4    73.0       66.9        52.5    41.4    42.4
LLama 13B                68.5     79.1    *76.2*     70.1        60.0    *44.6*  42.2
Pythia 6.9B              63.5     76.3    64.0       61.1        61.3    35.2    37.2
Pythia 12B               67.7     76.6    67.3       63.8        63.9    34.8    38.0
Vicuña T5                81.5     64.6    46.3       61.8        49.3    33.3    39.4
Vicuña 13B               81.5     76.8    73.3       66.7        57.4    42.7    43.6
Stable Vicuña RLHF       82.3     78.6    74.1       70.9        61.0    43.5    *44.4*
StableLM Tuned           62.5     71.2    53.6       54.8        52.4    31.1    33.4
StableLM Base            60.1     67.4    41.2       50.1        44.9    27.0    32.0
```