---
license: gpl
---

# gpt4-x-vicuna-13B-GPTQ

This repo contains 4bit GPTQ format quantised models of [NousResearch's gpt4-x-vicuna-13b](https://huggingface.co/NousResearch/gpt4-x-vicuna-13b).

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML).
* [float16 models in HF format for GPU inference](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-HF).

## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `gpt4-x-vicuna-13B.ggml.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10GB | Maximum compatibility |
| `gpt4-x-vicuna-13B.ggml.q4_2.bin` | q4_2 | 4bit | 8.14GB | 10GB | Best compromise between resources, speed and quality |
| `gpt4-x-vicuna-13B.ggml.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11GB | Brand-new 5bit method. Potentially higher quality than 4bit, at the cost of slightly higher resource usage. |
| `gpt4-x-vicuna-13B.ggml.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12GB | Brand-new 5bit method. Slightly higher resource usage than q5_0. |

* The q4_0 file provides lower quality but maximal compatibility: it will work with past and future versions of llama.cpp.
* The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
* The q5_0 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_0.
* The q5_1 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_1.
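
If you only want one of these files rather than the whole repo, you can fetch it directly over HTTPS from the GGML repo. A minimal sketch, assuming the filenames in the table above match the repo contents:

```
# ~8GB download; swap in q4_0 / q5_0 / q5_1 as needed
wget https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML/resolve/main/gpt4-x-vicuna-13B.ggml.q4_2.bin
```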

## q4_2 compatibility

q4_2 is a relatively new 4bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.

To use these files you will need recent llama.cpp code, and it is possible that future updates to llama.cpp will require these files to be re-generated.

If and when the q4_2 file no longer works with recent versions of llama.cpp, I will endeavour to update it.

If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.

## q5_0 and q5_1 compatibility

These new methods were released to llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild to be able to use them.
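
For example, a fresh build on Linux or macOS looks like this (a minimal sketch; if you already have a checkout, `git pull` inside it and re-run `make` instead):

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```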

Don't expect any third-party UIs/tools to support them yet.

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 12 -m gpt4-x-vicuna-13B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```

Change `-t 12` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
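
If you're not sure how many physical cores you have, on Linux `lscpu` reports sockets and cores per socket (multiply the two):

```
lscpu | grep -E '^(Socket|Core)'
```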

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
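
For example, the same invocation in interactive instruct mode, with all other flags unchanged:

```
./main -t 12 -m gpt4-x-vicuna-13B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```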

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui does not support the new q5 quantisation methods.

**Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that these files can be used in the UI.

# Original model card

As a base model we used https://huggingface.co/eachadea/vicuna-13b-1.1

Finetuned on Teknium's GPTeacher dataset, an unreleased Roleplay v2 dataset, the GPT-4-LLM dataset, and the Nous Research Instruct Dataset.

Approx. 180k instructions, all from GPT-4, all cleaned of any OpenAI censorship/"As an AI Language Model" etc.

The base model still has OpenAI censorship. Soon, a new version will be released with cleaned Vicuna data from https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltere

Trained on 8 A100-80GB GPUs for 5 epochs following the Alpaca deepspeed training code.

The Nous Research Instruct Dataset will be released soon.

GPTeacher and Roleplay v2 by https://huggingface.co/teknium

WizardLM by https://github.com/nlpxucan

Nous Research Instruct Dataset by https://huggingface.co/karan4d and https://huggingface.co/huemin

Compute provided by our project sponsor https://redmond.ai/