---
language:
  - en
tags:
  - causal-lm
  - llama
inference: false
---

Wizard-Vicuna-13B-GGML

These are 4bit and 5bit quantised GGML format models of junelee's wizard-vicuna 13B.

They are the result of quantising the original model to 4bit and 5bit GGML for CPU inference using llama.cpp.

Repositories available

Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ------------ | ---- | ---- | ------------ | -------- |
| wizard-vicuna-13B.ggml.q4_0.bin | q4_0 | 4bit | 8.14GB | 10.5GB | Maximum compatibility |
| wizard-vicuna-13B.ggml.q4_2.bin | q4_2 | 4bit | 8.14GB | 10.5GB | Best compromise between resources, speed and quality |
| wizard-vicuna-13B.ggml.q5_0.bin | q5_0 | 5bit | 8.95GB | 11.0GB | Brand-new 5bit method. Potentially higher quality than 4bit, at the cost of slightly higher resource usage. |
| wizard-vicuna-13B.ggml.q5_1.bin | q5_1 | 5bit | 9.76GB | 12.25GB | Brand-new 5bit method. Slightly higher resource usage than q5_0. |
  • The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
  • The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
  • The q5_0 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_0.
  • The q5_1 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_1.
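
If you want to fetch one of these files from the command line, the sketch below shows one way to do it. The repository id and the resolve URL pattern are assumptions on my part rather than something stated on this card, so check the repository page for the exact path.

# Hypothetical download example; verify the repo id and filename on the repository page
wget https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML/resolve/main/wizard-vicuna-13B.ggml.q4_2.bin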

q4_2 compatibility

q4_2 is a relatively new 4bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.

To use this file you will need recent llama.cpp code, and it is possible that future updates to llama.cpp could require the file to be re-generated.

If and when the q4_2 file no longer works with recent versions of llama.cpp I will endeavour to update it.

If you want to ensure guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.

q5_0 and q5_1 compatibility

These new methods were released to llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild to be able to use them.
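
As a rough sketch, updating and rebuilding llama.cpp might look like the following. This assumes a plain make-based build on Linux or macOS; accelerated builds (e.g. cuBLAS or OpenBLAS) may need extra flags, so consult the llama.cpp README for your setup.

# Pull the latest llama.cpp code and rebuild from scratch
cd llama.cpp
git pull
make clean && make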

Third party tools/UIs may or may not support them. Check you're using the latest version of any such tools and ask the devs for advice if you find you can't load q5 files.

How to run in llama.cpp

I use the following command line; adjust for your tastes and needs:

./main -t 18 -m wizard-vicuna-13B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"

Change -t 18 to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use -t 8.
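
If you are unsure how many physical cores you have, the sketch below is one way to find out on Linux (it assumes lscpu is available; on macOS you could use sysctl -n hw.physicalcpu instead) and pass the result to -t.

# Count unique physical cores (not hyper-threads) and use that as the thread count
THREADS=$(lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l)
./main -t "$THREADS" -m wizard-vicuna-13B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"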

How to run in text-generation-webui

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: text-generation-webui/docs/llama.cpp-models.md.
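
As a minimal sketch, placing the file might look like this. The directory names are assumptions based on a standard text-generation-webui checkout and may differ in your install; see the docs linked above for the authoritative layout.

# Hypothetical example: put the GGML file in its own folder under models/
mkdir -p text-generation-webui/models/wizard-vicuna-13B-GGML
mv wizard-vicuna-13B.ggml.q4_2.bin text-generation-webui/models/wizard-vicuna-13B-GGML/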

Note: at this time text-generation-webui may not support the new q5 quantisation methods.

Thireus has written a great guide on how to update it to the latest llama.cpp code so that these files can be used in the UI.

Original WizardVicuna-13B model card

Github page: https://github.com/melodysdreamj/WizardVicunaLM

WizardVicunaLM

Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method

I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like the idea of WizardLM handling the dataset itself more deeply and broadly, as well as VicunaLM overcoming the limitations of single-turn conversations by introducing multi-round conversations. As a result, I combined these two ideas to create WizardVicunaLM. This project is highly experimental and designed for proof of concept, not for actual usage.

Benchmark

Approximately 7% performance improvement over VicunaLM

Detail

The questions presented here are not from rigorous tests, but rather, I asked a few questions and requested GPT-4 to score them. The models compared were ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order.

| | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | link |
| --- | --- | --- | --- | --- | --- |
| Q1 | 95 | 90 | 85 | 88 | link |
| Q2 | 95 | 97 | 90 | 89 | link |
| Q3 | 85 | 90 | 80 | 65 | link |
| Q4 | 90 | 85 | 80 | 75 | link |
| Q5 | 90 | 85 | 80 | 75 | link |
| Q6 | 92 | 85 | 87 | 88 | link |
| Q7 | 95 | 90 | 85 | 92 | link |
| Q8 | 90 | 85 | 75 | 70 | link |
| Q9 | 92 | 85 | 70 | 60 | link |
| Q10 | 90 | 80 | 75 | 85 | link |
| Q11 | 90 | 85 | 75 | 65 | link |
| Q12 | 85 | 90 | 80 | 88 | link |
| Q13 | 90 | 95 | 88 | 85 | link |
| Q14 | 94 | 89 | 90 | 91 | link |
| Q15 | 90 | 85 | 88 | 87 | link |
| Average | 91 | 88 | 82 | 80 | |

Principle

We adopted the approach of WizardLM, which is to extend a single problem more in-depth. However, instead of using individual instructions, we expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques.

Turning a single command into a rich conversation is what we've done here.

After creating the training data, I trained the model on it according to the Vicuna v1.1 training method.

Detailed Method

First, we explore and expand various areas within the same topic using the 7K conversations created by WizardLM, but in a continuous conversation format rather than the instruction format. That is, each example starts with a WizardLM instruction and then expands into various areas within a single conversation using ChatGPT 3.5.

After that, we fine-tuned the model on this data using Vicuna's fine-tuning format.

Training Process

Trained with 8 A100 GPUs for 35 hours.

Weights

You can find the dataset we used for training and the 13b model on Hugging Face.

Conclusion

If we extend the conversations using GPT-4's 32K context, we can expect a dramatic improvement, as we could generate 8x more data with more accurate and richer conversations.

License

The model is subject to the LLaMA model licence, and the dataset is subject to OpenAI's terms because it was generated with ChatGPT. Everything else is free.

Author

JUNE LEE - He is active in Songdo Artificial Intelligence Study and GDG Songdo.