---
license: other
inference: false
---
# WizardLM: An Instruction-following LLM Using Evol-Instruct
These files are the result of merging the [delta weights](https://huggingface.co/victor123/WizardLM) with the original LLaMA 7B model.
The code for merging is provided in the [WizardLM official Github repo](https://github.com/nlpxucan/WizardLM).
## WizardLM-7B GGML
This repo contains GGML files for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
## Other repositories available
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizardLM-7B-GPTQ)
* [Unquantised model in HF format](https://huggingface.co/TheBloke/wizardLM-7B-HF)
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `WizardLM-7B.GGML.q4_0.bin` | q4_0 | 4bit | 4.0GB | 6GB | Maximum compatibility |
| `WizardLM-7B.GGML.q4_2.bin` | q4_2 | 4bit | 4.0GB | 6GB | Best compromise between resources, speed and quality |
| `WizardLM-7B.GGML.q4_3.bin` | q4_3 | 4bit | 4.8GB | 7GB | Maximum quality, higher RAM requirements and slower inference |
* The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
* The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues, see below.
* The q4_3 file offers the highest quality, at the cost of increased RAM usage and slower inference speed. This format is still subject to change and there may be compatibility issues, see below.
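Once you have downloaded one of the files above, it can be run directly with the llama.cpp `main` binary. The following is a minimal sketch, assuming llama.cpp has been cloned and built, and that the q4_0 file from this repo is in the current directory; the prompt text is only an illustration, not a required template.

```shell
# Run the quantised model on CPU with llama.cpp.
# -m: path to the GGML model file
# -p: prompt text
# -n: maximum number of tokens to generate
# -t: number of CPU threads to use
./main -m WizardLM-7B.GGML.q4_0.bin \
  -p "### Instruction: Explain quantisation in one sentence.\n\n### Response:" \
  -n 128 -t 8
```

For the q4_2 or q4_3 files, substitute the corresponding filename; as noted below, those formats require a recent build of llama.cpp.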
## q4_2 and q4_3 compatibility
q4_2 and q4_3 are new 4bit quantisation methods offering improved quality. However they are still under development and their formats are subject to change.
In order to use these files you will need recent llama.cpp code, and it is possible that future updates to llama.cpp will require these files to be re-generated.
If and when the q4_2 and q4_3 files no longer work with recent versions of llama.cpp I will endeavour to update them.
If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.
# Original model info
## Overview of Evol-Instruct
Evol-Instruct is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions across a range of difficulty levels and skills, in order to improve the performance of LLMs.
![info](https://github.com/nlpxucan/WizardLM/raw/main/imgs/git_running.png)