---
license: other
inference: false
---

WizardLM: An Instruction-following LLM Using Evol-Instruct

These files are the result of merging the delta weights with the original Llama 7B model.

The code for merging is provided in the official WizardLM GitHub repo.

WizardLM-7B GGML

This repo contains GGML files for CPU inference using llama.cpp.

Other repositories available

REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!

llama.cpp recently made a breaking change to its quantisation methods.

I have re-quantised the GGML files in this repo. Therefore you will require llama.cpp compiled on May 12th or later (commit b9fd7ee or later) to use them.

The previous files, which will still work in older versions of llama.cpp, can be found in branch previous_llama.
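If you need to update llama.cpp, here is a minimal build sketch (assuming a Linux or macOS system with git and make available; adjust for your platform):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make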

Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| WizardLM-7B.GGML.q4_0.bin | q4_0 | 4-bit | 4.2 GB | 6 GB | 4-bit; smallest file size and lowest RAM requirement. |
| WizardLM-7B.GGML.q5_0.bin | q5_0 | 5-bit | 4.63 GB | 7 GB | Higher quality inference than 4-bit, at the cost of slightly higher resource usage. |
| WizardLM-7B.GGML.q5_1.bin | q5_1 | 5-bit | 5.0 GB | 7 GB | Higher quality again, with correspondingly higher resource usage. |

How to run in llama.cpp

I use the following command line; adjust for your tastes and needs:

./main -t 18 -m WizardLM-7B.GGML.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"

Change -t 18 to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use -t 8.
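If you are unsure how many physical cores your machine has, on Linux something like the following shows sockets and cores per socket (assuming lscpu is available):

lscpu | grep -E '^(Socket|Core)'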

If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins.
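For example, a sketch of the same command in chat mode (the -t value is illustrative; set it as described above):

./main -t 8 -m WizardLM-7B.GGML.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins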

How to run in text-generation-webui

Further instructions here: text-generation-webui/docs/llama.cpp-models.md.

Note: at this time text-generation-webui may not support the new llama.cpp quantisation methods (May 12th).

Thireus has written a great guide on how to update text-generation-webui to the latest llama.cpp code, so you can get support for the newer files sooner.

Original model info

Overview of Evol-Instruct

Evol-Instruct is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions of varying difficulty levels and skill ranges, in order to improve the performance of LLMs.
