---
inference: false
license: other
---

# Tim Dettmers' Guanaco 33B GGML

These files are GGML format model files for Tim Dettmers' Guanaco 33B.

GGML files are for CPU inference using llama.cpp, and for libraries and UIs which support this format, such as text-generation-webui (covered below).


**THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!**

llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit 2d5db48 or later) to use them.
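
If you need to update, a minimal build sketch (assuming a Linux or macOS system with git and make; the commit hash is the one referenced above):

```sh
# Clone llama.cpp and build it at (or after) the May 19th quantisation change
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48   # or any later commit
make                   # produces the ./main binary used below
```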

## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ------------ | ---- | ---- | ------------ | -------- |
| guanaco-33B.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | 4-bit. |
| guanaco-33B.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
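
To fetch a single file, one hedged option is a direct download with wget (the repository path TheBloke/guanaco-33B-GGML is assumed from this page; adjust it if your copy lives elsewhere):

```sh
# Download only the 4-bit file; it is ~18.3 GB, so check your disk space first
wget https://huggingface.co/TheBloke/guanaco-33B-GGML/resolve/main/guanaco-33B.ggmlv3.q4_0.bin
```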

## How to run in llama.cpp

I use the following command line; adjust for your tastes and needs:

    ./main -t 12 -m guanaco-33B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
    ### Instruction:
    Write a story about llamas
    ### Response:"
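
For longer prompts it can be easier to keep the template in a file and pass it with llama.cpp's -f flag; a small sketch (prompt.txt is a hypothetical file name):

```sh
# Write the prompt template to a file, then point main at it with -f
printf '%s\n' \
  "Below is an instruction that describes a task. Write a response that appropriately completes the request." \
  "### Instruction:" \
  "Write a story about llamas" \
  "### Response:" > prompt.txt
./main -t 12 -m guanaco-33B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -f prompt.txt
```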

Change -t 12 to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use -t 8.
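
If you are unsure how many physical cores your machine has, the standard system tools will tell you:

```sh
# Linux: physical cores = Socket(s) × Core(s) per socket
lscpu | grep -E '^(Socket\(s\)|Core\(s\) per socket)'

# macOS:
sysctl -n hw.physicalcpu
```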

If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins.
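
Putting that together, a chat-style invocation keeps all the flags above and swaps the prompt for interactive instruction mode:

```sh
./main -t 12 -m guanaco-33B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```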

## How to run in text-generation-webui

Further instructions here: text-generation-webui/docs/llama.cpp-models.md.

Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
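
If loading fails for that reason, one hedged workaround, assuming your text-generation-webui install uses the llama-cpp-python package for GGML models, is to upgrade those bindings inside the webui's Python environment:

```sh
# Assumption: the webui loads GGML models through llama-cpp-python;
# upgrading it pulls in support for the May 19th quantisation formats.
pip install --upgrade llama-cpp-python
```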

# Original model card: Tim Dettmers' Guanaco 33B