---
license: other
---

# Koala: A Dialogue Model for Academic Research

This repo contains the weights of the Koala 7B model produced at Berkeley. It is the result of applying the delta weights from https://huggingface.co/young-geng/koala to the original LLaMA 7B model.

This version was then quantized to 4-bit using https://github.com/qwopqwop200/GPTQ-for-LLaMa.

For the unquantized model in HF format, see this repo: https://huggingface.co/TheBloke/koala-7B-HF

For the unquantized model in GGML format for llama.cpp, see this repo: https://huggingface.co/TheBloke/koala-7b-ggml-unquantized

**WARNING:** At present, the GPTQ files uploaded here appear to produce garbage output. Using them is not recommended.

I'm working on diagnosing this issue. If you manage to get the files working, please let me know!

The quantization command was:

```
python3 llama.py /content/koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save /content/koala-7B-4bit-128g.pt
```
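The `--wbits 4 --groupsize 128` flags request 4-bit weights with a separate scale and zero-point per group of 128 values. As a rough illustration of what per-group 4-bit quantization means, here is a plain round-to-nearest sketch in NumPy. This is only the naive baseline, not the actual GPTQ algorithm, which additionally minimizes layer output error:

```python
import numpy as np

def quantize_rtn_4bit(w: np.ndarray, groupsize: int = 128):
    """Round-to-nearest 4-bit quantization, one scale/zero per group.
    Illustration only -- GPTQ solves a least-squares problem instead."""
    w = w.reshape(-1, groupsize)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / 15.0  # 4 bits -> 16 levels (0..15)
    q = np.clip(np.round((w - wmin) / scale), 0, 15).astype(np.uint8)
    return q, scale, wmin

def dequantize_4bit(q: np.ndarray, scale: np.ndarray, wmin: np.ndarray):
    """Map 4-bit codes back to approximate float weights."""
    return q.astype(np.float32) * scale + wmin

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale, wmin = quantize_rtn_4bit(w)
w_hat = dequantize_4bit(q, scale, wmin).reshape(-1)
print(np.abs(w - w_hat).max())  # per-group error is bounded by scale / 2
```

Smaller group sizes give finer scales (lower error) at the cost of storing more per-group metadata; `--groupsize 128` is the common middle ground.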

The Koala delta weights were originally merged using the following commands, producing koala-7B-HF:

```
git clone https://github.com/young-geng/EasyLM

git clone https://huggingface.co/nyanko7/LLaMA-7B

git clone https://huggingface.co/young-geng/koala koala_diffs

cd EasyLM

PYTHONPATH="${PWD}:$PYTHONPATH" python \
  -m EasyLM.models.llama.convert_torch_to_easylm \
  --checkpoint_dir=/content/LLaMA-7B \
  --output_file=/content/llama-7B-LM \
  --streaming=True

PYTHONPATH="${PWD}:$PYTHONPATH" python \
  -m EasyLM.scripts.diff_checkpoint --recover_diff=True \
  --load_base_checkpoint='params::/content/llama-7B-LM' \
  --load_target_checkpoint='params::/content/koala_diffs/koala_7b_diff_v2' \
  --output_file=/content/koala_7b.diff.weights \
  --streaming=True

PYTHONPATH="${PWD}:$PYTHONPATH" python \
  -m EasyLM.models.llama.convert_easylm_to_hf --model_size=7b \
  --output_dir=/content/koala-7B-HF \
  --load_checkpoint='params::/content/koala_7b.diff.weights' \
  --tokenizer_path=/content/LLaMA-7B/tokenizer.model
```
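The `--recover_diff=True` step reconstructs full weights from the published delta. Conceptually, Koala is distributed as `koala - llama` (so the release does not contain the LLaMA weights themselves), and recovery is elementwise addition against the base checkpoint. A toy sketch with dictionaries of arrays standing in for EasyLM's actual sharded checkpoint format:

```python
import numpy as np

# Toy stand-ins for checkpoint tensors (real checkpoints are sharded files).
base = {"w": np.array([1.0, 2.0, 3.0]), "b": np.array([0.5])}
full = {"w": np.array([1.5, 1.0, 3.25]), "b": np.array([0.0])}

# What a delta-weights release distributes instead of the full model:
diff = {k: full[k] - base[k] for k in base}

# What recovery does conceptually: base + diff reproduces the full weights.
recovered = {k: base[k] + diff[k] for k in base}

for k in base:
    assert np.allclose(recovered[k], full[k])
print("recovered weights match the full checkpoint")
```

This is why anyone merging the deltas needs their own copy of the original LLaMA 7B weights: the diff alone is useless without the base.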

To learn more about the Berkeley Koala model, see the BAIR blog post: https://bair.berkeley.edu/blog/2023/04/03/koala/

## License

The model weights are intended for academic research only, subject to the model License of LLaMA, Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.