---
license: other
---
|
# superhot-13b-8k-no-rlhf-test-GGML
|
|
|
**Note: `LLAMA_ROPE_SCALE` from PR [#1967](https://github.com/ggerganov/llama.cpp/pull/1967) needs to be set to 0.25.**
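A scale of 0.25 linearly compresses position indices by 4x, so an 8k (8192-token) window maps into the 2048-position range the base model was trained on. A minimal sketch of that "position interpolation" idea, assuming the standard RoPE frequency formula (the function name, head dimension, and frequency base here are illustrative, not llama.cpp's actual code):

```python
def rope_angles(pos, dim=128, base=10000.0, scale=1.0):
    """Rotation angles RoPE applies at one token position.
    With scale < 1, positions are compressed before the rotation,
    which is what LLAMA_ROPE_SCALE = 0.25 does for 8k context."""
    return [(pos * scale) / base ** (2 * i / dim) for i in range(dim // 2)]

# Position 8192 with scale 0.25 rotates exactly like position 2048
# did in the unscaled base model -- inside its trained range.
scaled = rope_angles(8192, scale=0.25)
unscaled = rope_angles(2048, scale=1.0)
```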
|
|
|
Merged the base LLaMA model and the LoRA with the export script from:

https://github.com/tloen/alpaca-lora
|
|
|
Base LLaMA 13B:

https://huggingface.co/huggyllama/llama-13b

SuperHOT 13B 8k no-rlhf-test LoRA:

https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test
|
|
|
```sh
BASE_MODEL=huggyllama_llama-13b LORA=kaiokendev_superhot-13b-8k-no-rlhf-test python export_hf_checkpoint.py
```
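What `export_hf_checkpoint.py` effectively does is fold the low-rank adapter into the base weights: W ← W + (alpha/r)·B·A. A toy pure-Python sketch of that arithmetic (names and shapes are illustrative; the real script operates on full transformer tensors via PEFT):

```python
def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A): the LoRA merge for one weight.
    W is (rows x cols), B is (rows x r), A is (r x cols)."""
    scale = alpha / r
    out = [row[:] for row in W]  # copy; base weights stay untouched
    for i in range(len(W)):
        for j in range(len(W[0])):
            out[i][j] += scale * sum(B[i][k] * A[k][j] for k in range(r))
    return out

# 2x2 identity weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # (rows x r)
A = [[0.5, 0.5]]     # (r x cols)
merged = merge_lora(W, A, B, alpha=1.0, r=1)
```

After merging, the adapter matrices are no longer needed at inference time, which is why the output can be converted and quantized like an ordinary checkpoint.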
|
|
|
Converted and quantized with llama.cpp commit `447ccbe`:
|
|
|
```sh
python convert.py superhot-13b-8k-safetensors --outtype f32 --outfile superhot-13b-8k-no-rlhf-test.ggmlv3.f32.bin
./bin/quantize superhot-13b-8k-no-rlhf-test.ggmlv3.f32.bin superhot-13b-8k-no-rlhf-test.ggmlv3.Q2_K.bin Q2_K
```
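For intuition about what the `Q2_K` step trades away: k-quant formats store each block of weights as a few bits per value plus per-block scale factors. A heavily simplified 2-bit round-trip sketch (the real Q2_K layout uses 256-element super-blocks with separate scales and minimums; this is not llama.cpp's code):

```python
def quantize_2bit(values):
    """Collapse a block of floats to 2-bit codes 0..3 plus scale and min."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 3 or 1.0  # 3 steps span the block's range
    q = [min(3, max(0, round((x - lo) / scale))) for x in values]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Reconstruct approximate floats from the 2-bit codes."""
    return [lo + scale * v for v in q]

block = [-1.0, -0.2, 0.4, 1.0]
q, scale, lo = quantize_2bit(block)
approx = dequantize(q, scale, lo)  # lossy: values snap to 4 levels
```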