---
license: apache-2.0
---
# ggml versions of OpenLLaMA 7B v2

For use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
- Version: [version 2 final 1T tokens](https://github.com/openlm-research/open_llama#07072023)
- Project: [OpenLLaMA: An Open Reproduction of LLaMA](https://github.com/openlm-research/open_llama)
- Model: [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2)
- llama.cpp 4-, 5-, and 8-bit quantization: build 567 (2d5db48) or later
- llama.cpp newer quantization formats: build 616 (99009e7) or later
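
For a quick programmatic smoke test (a sketch under assumptions, not something this card prescribes), the quantized files can also be loaded through the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings, provided the installed version still reads ggml files. The filename below is a placeholder; substitute whichever quantized file you downloaded from this repository.

```python
# Minimal sketch: running a ggml quantization of OpenLLaMA 7B v2 via the
# llama-cpp-python bindings. The model filename is hypothetical and should be
# replaced with the actual file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(model_path="./open-llama-7b-v2-q4_0.bin", n_ctx=2048)

output = llm(
    "Q: What is the capital of France? A:",
    max_tokens=32,
    stop=["\n"],
)
print(output["choices"][0]["text"])
```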
## Perplexity

Coming soon...