---
license: other
inference: false
---

gpt4-x-vicuna-13B-GGML

These files are GGML format model files for NousResearch's gpt4-x-vicuna-13b.

GGML files are for CPU inference using llama.cpp.

Repositories available

THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit 2d5db48 or later) to use them.

For files compatible with the previous version of llama.cpp, please see branch previous_llama_ggmlv2.
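If your local build predates that commit, a minimal sketch of getting a compatible build (assuming git, a C/C++ toolchain, and the standard `make` build on Linux/macOS; adjust for your platform) looks like this:

```
# Minimal sketch: build llama.cpp at or after commit 2d5db48 (GGMLv3 support)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48    # optional: pin to exactly this commit; omit to build the latest code
make
```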

Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| gpt4-x-vicuna-13B.ggmlv3.q4_0.bin | q4_0 | 4 | 8.14 GB | 10 GB | 4-bit. |
| gpt4-x-vicuna-13B.ggmlv3.q4_1.bin | q4_1 | 4 | 8.95 GB | 10 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, inference is quicker than with the q5 models. |
| gpt4-x-vicuna-13B.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| gpt4-x-vicuna-13B.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12 GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
| gpt4-x-vicuna-13B.ggmlv3.q8_0.bin | q8_0 | 8 | 16 GB | 18 GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |

How to run in llama.cpp

I use the following command line; adjust for your tastes and needs:

```
./main -t 12 -m gpt4-x-vicuna-13B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```

Change -t 12 to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use -t 8.
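If you are unsure of your physical core count, one way to check it on Linux (assuming lscpu is available; this command is an illustration, not part of the original instructions) is:

```
# Counts unique (core, socket) pairs, i.e. physical cores, ignoring hyperthreads
lscpu -p=Core,Socket | grep -v '^#' | sort -u | wc -l
```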

If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins
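For example, a sketch of an interactive chat invocation (shown here with the q5_0 file; any of the provided files works the same way):

```
# Interactive chat mode: -i -ins replaces the one-shot -p prompt
./main -t 8 -m gpt4-x-vicuna-13B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```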

How to run in text-generation-webui

Further instructions here: text-generation-webui/docs/llama.cpp-models.md.

Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
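As a rough sketch (assuming a standard text-generation-webui checkout; the exact paths and flags may differ between versions), loading one of these files looks like:

```
# Copy the GGML file into text-generation-webui's models folder and start the server
cp gpt4-x-vicuna-13B.ggmlv3.q5_0.bin text-generation-webui/models/
cd text-generation-webui
python server.py --model gpt4-x-vicuna-13B.ggmlv3.q5_0.bin --threads 8
```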

Original model card

The base model used was https://huggingface.co/eachadea/vicuna-13b-1.1

It was finetuned on Teknium's GPTeacher dataset, an unreleased Roleplay v2 dataset, the GPT-4-LLM dataset, and the Nous Research Instruct Dataset.

Approximately 180k instructions, all from GPT-4, all cleaned of any OpenAI censorship ("As an AI Language Model", etc.).

The base model still has OpenAI censorship. Soon, a new version will be released with cleaned Vicuna data from https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltere

Trained on 8x A100 80GB GPUs for 5 epochs following the Alpaca deepspeed training code.

Nous Research Instruct Dataset will be released soon.

GPTeacher, Roleplay v2 by https://huggingface.co/teknium

Wizard LM by https://github.com/nlpxucan

Nous Research Instruct Dataset by https://huggingface.co/karan4d and https://huggingface.co/huemin

Compute provided by our project sponsor https://redmond.ai/