
# Koala: A Dialogue Model for Academic Research

This repo contains the weights of the Koala 7B model produced at Berkeley. It is the result of applying the weight diffs from https://huggingface.co/young-geng/koala to the original Llama 7B model.
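
For reference, a minimal sketch of what recombining the diffs might look like, assuming a Hugging Face-format base checkpoint and a diff file that stores additive per-parameter deltas. The file paths and the diff format here are hypothetical; the recovery script in young-geng's repo is the authoritative method.

```python
import torch
from transformers import LlamaForCausalLM

# Load the original Llama 7B weights (path is hypothetical).
base = LlamaForCausalLM.from_pretrained("path/to/llama-7b-hf")

# Hypothetical diff checkpoint: a state dict of additive deltas
# keyed by the same parameter names as the base model.
diff = torch.load("path/to/koala-7b-diff.pt", map_location="cpu")

state = base.state_dict()
for name, delta in diff.items():
    state[name] = state[name] + delta  # recovered weight = base + diff

base.load_state_dict(state)
base.save_pretrained("koala-7B-HF")  # input directory for the quantization step below
```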

This version has then been quantized to 4-bit using https://github.com/qwopqwop200/GPTQ-for-LLaMa.

**WARNING:** At present, the GPTQ files uploaded here produce garbage output. It is not recommended to use them.

I'm working on diagnosing the issue and producing working files.

The quantization command was:

```
python3 llama.py /content/koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save /content/koala-7B-4bit-128g.pt
```
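
For intuition about what `--wbits 4 --groupsize 128` mean for the stored weights, here is an illustrative round-to-nearest sketch of 4-bit quantization with one scale and zero point per group of 128 weights. This is only the storage format: GPTQ itself goes further and corrects the rounding error column by column against calibration data (the `c4` argument above).

```python
import numpy as np

def quantize_rtn(w, wbits=4, groupsize=128):
    """Round-to-nearest asymmetric quantization, one scale/zero per group.

    Illustrative only; GPTQ uses a more sophisticated, error-correcting
    procedure, but produces tensors in a comparable grouped format.
    """
    qmax = 2**wbits - 1                       # 4 bits -> integer levels 0..15
    w = w.reshape(-1, groupsize)              # one row per group of 128 weights
    scale = (w.max(axis=1) - w.min(axis=1)) / qmax
    zero = np.round(-w.min(axis=1) / scale)   # zero point maps the group minimum to 0
    q = np.clip(np.round(w / scale[:, None] + zero[:, None]), 0, qmax)
    return q.astype(np.uint8), scale, zero

w = np.random.randn(4096 * 128).astype(np.float32)
q, scale, zero = quantize_rtn(w)

# Dequantize and measure the rounding error introduced by 4-bit storage.
w_hat = (q.astype(np.float32) - zero[:, None]) * scale[:, None]
print("max abs error:", np.abs(w_hat - w.reshape(-1, 128)).max())
```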

Check out the following links to learn more about the Berkeley Koala model.

## License

The model weights are intended for academic research only, subject to the model license of LLaMA, the Terms of Use of the data generated by OpenAI, and the Privacy Practices of ShareGPT. Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.