---
license: gpl
datasets:
- nomic-ai/gpt4all-j-prompt-generations
language:
- en
inference: false
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# GPT4All-13B-snoozy-GPTQ

This repo contains 4bit GPTQ format quantised models of [Nomic.AI's GPT4all-13B-snoozy](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy).

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/GPT4ALL-13B-snoozy-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/GPT4ALL-13B-snoozy-GGML).
* [Nomic.AI's original model in float32 HF for GPU inference](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy).

## How to easily download and use this model in text-generation-webui

Open text-generation-webui as normal.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/GPT4All-13B-snoozy-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `GPT4All-13B-snoozy-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
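As a note on prompting: GPT4All snoozy models are commonly reported to follow an Alpaca-style instruction template. Assuming that format applies here (verify against the original model card before relying on it), a small hypothetical helper for building prompts programmatically might look like:

```python
# Hypothetical helper: builds the Alpaca-style instruction prompt that
# GPT4All models are commonly reported to expect. The exact template is
# an assumption, not confirmed by this card.
def build_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

prompt = build_prompt("Write a haiku about quantisation.")
```

The model's completion would then be generated from everything after the final `### Response:` marker.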

## Provided files

**Compatible file - GPT4ALL-13B-GPTQ-4bit-128g.compat.no-act-order.safetensors**

In the `main` branch - the default one - you will find `GPT4ALL-13B-GPTQ-4bit-128g.compat.no-act-order.safetensors`.

This file will work with all versions of GPTQ-for-LLaMa, offering maximum compatibility.

It was created without the `--act-order` parameter. It may have slightly lower inference quality than an act-order quantisation, but it is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.

* `GPT4ALL-13B-GPTQ-4bit-128g.compat.no-act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128g. No act-order.
  * Command used to create the GPTQ:
    ```
    CUDA_VISIBLE_DEVICES=0 python3 llama.py GPT4All-13B-snoozy c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors GPT4ALL-13B-GPTQ-4bit-128g.compat.no-act-order.safetensors
    ```
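The `--wbits 4 --groupsize 128` flags above select 4-bit weights quantised in groups of 128, with each group carrying its own scale and zero-point. As a minimal sketch of that storage scheme (simple round-to-nearest, not the actual GPTQ algorithm, which additionally minimises layer output error using second-order information):

```python
# Illustrative sketch of group-wise 4-bit quantisation (groupsize = 128).
# This is round-to-nearest only; real GPTQ solves a layer-wise
# reconstruction problem on top of this storage format.
import numpy as np

def quantize_groupwise(w, bits=4, groupsize=128):
    """Quantise a 1-D weight vector with a scale/zero-point per group."""
    levels = 2 ** bits - 1                       # 15 distinct steps for 4-bit
    w = w.reshape(-1, groupsize)                 # one row per group
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / levels             # per-group step size
    q = np.clip(np.round((w - w_min) / scale), 0, levels).astype(np.uint8)
    return q, scale, w_min

def dequantize(q, scale, w_min):
    """Reconstruct approximate float weights from 4-bit codes."""
    return q * scale + w_min

rng = np.random.default_rng(0)
weights = rng.standard_normal(1024).astype(np.float32)
q, scale, zero = quantize_groupwise(weights)
error = np.abs(dequantize(q, scale, zero).ravel() - weights).max()
```

Each weight is stored as a 4-bit code plus a small per-group overhead, which is where most of the ~4x size reduction versus float16 comes from.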


<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter


Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->
# Original Model Card for GPT4All-13b-snoozy

An Apache-2 licensed chatbot trained over a massive curated corpus of assistant interactions including word problems, multi-turn dialogue, code, poems, songs, and stories.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model has been finetuned from LLaMA 13B.

- **Developed by:** [Nomic AI](https://home.nomic.ai)
- **Model Type:** A finetuned LLaMA 13B model on assistant-style interaction data
- **Language(s) (NLP):** English
- **License:** Apache-2
- **Finetuned from model [optional]:** LLaMA 13B

This model was trained on `nomic-ai/gpt4all-j-prompt-generations` using `revision=v1.3-groovy`.

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)
- **Base Model Repository:** [https://github.com/facebookresearch/llama](https://github.com/facebookresearch/llama)
- **Demo [optional]:** [https://gpt4all.io/](https://gpt4all.io/)


### Results

Results on common sense reasoning benchmarks

```
  Model                     BoolQ       PIQA     HellaSwag   WinoGrande    ARC-e      ARC-c       OBQA
  ----------------------- ---------- ---------- ----------- ------------ ---------- ---------- ----------
  GPT4All-J 6B v1.0          73.4       74.8       63.4         64.7        54.9       36.0       40.2
  GPT4All-J v1.1-breezy      74.0       75.1       63.2         63.6        55.4       34.9       38.4
  GPT4All-J v1.2-jazzy       74.8       74.9       63.6         63.8        56.6       35.3       41.0
  GPT4All-J v1.3-groovy      73.6       74.3       63.8         63.5        57.7       35.0       38.8
  GPT4All-J Lora 6B          68.6       75.8       66.2         63.5        56.4       35.7       40.2
  GPT4All LLaMa Lora 7B      73.1       77.6       72.1         67.8        51.1       40.4       40.2
  GPT4All 13B snoozy        *83.3*      79.2       75.0        *71.3*       60.9       44.2       43.4
  Dolly 6B                   68.8       77.3       67.6         63.9        62.9       38.7       41.2
  Dolly 12B                  56.7       75.4       71.0         62.2       *64.6*      38.5       40.4
  Alpaca 7B                  73.9       77.2       73.9         66.1        59.8       43.3       43.4
  Alpaca Lora 7B             74.3      *79.3*      74.0         68.8        56.6       43.9       42.6
  GPT-J 6B                   65.4       76.2       66.2         64.1        62.2       36.6       38.2
  LLama 7B                   73.1       77.4       73.0         66.9        52.5       41.4       42.4
  LLama 13B                  68.5       79.1      *76.2*        70.1        60.0      *44.6*      42.2
  Pythia 6.9B                63.5       76.3       64.0         61.1        61.3       35.2       37.2
  Pythia 12B                 67.7       76.6       67.3         63.8        63.9       34.8       38.0
  Vicuña T5                  81.5       64.6       46.3         61.8        49.3       33.3       39.4
  Vicuña 13B                 81.5       76.8       73.3         66.7        57.4       42.7       43.6
  Stable Vicuña RLHF         82.3       78.6       74.1         70.9        61.0       43.5      *44.4*
  StableLM Tuned             62.5       71.2       53.6         54.8        52.4       31.1       33.4
  StableLM Base              60.1       67.4       41.2         50.1        44.9       27.0       32.0
```