---
license: llama2
pipeline_tag: text-generation
---
|
<!-- description start --> |
|
## Description |
|
Converted to f16 using the llama.cpp `convert.py` script, then quantized to q6_K using the `quantize` tool from the same llama.cpp repository.<br>
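For reference, the two steps look roughly like this (a sketch assuming a standard llama.cpp checkout; the source model path and intermediate filename are assumptions, and exact flags vary between llama.cpp versions):

```bash
# Convert the original HF weights to an f16 GGUF (paths are assumptions)
python3 convert.py /path/to/CodeLlama-70b-Python-hf \
  --outtype f16 \
  --outfile codellama-70b-python-f16.gguf

# Quantize the f16 GGUF to q6_K using the quantize tool from the same repo
./quantize codellama-70b-python-f16.gguf codellama-70b-python-q6_K.gguf q6_K
```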
|
The resulting file was split into 2 parts.<br><br>
|
**Note**: HF does not support uploading files larger than 50GB.<br> |
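If you need to reproduce the split (for example, when re-uploading), something like the following could be used (a sketch; the chunk size is an assumption chosen to stay under the 50GB per-file limit, and the resulting suffixes depend on your `split` options):

```bash
# Split the quantized file into chunks below HF's 50GB per-file limit
# (49G chunk size is an assumption; default suffixes are -aa, -ab, ...)
split -b 49G codellama-70b-python-q6_K.gguf codellama-70b-python-q6_K.gguf-split-
```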
|
<!-- description end --> |
|
### Files require joining
|
To join the files, do the following: <br> |
|
```bash
cat codellama-70b-python-q6_K.gguf-split-* > codellama-70b-python-q6_K.gguf && rm codellama-70b-python-q6_K.gguf-split-*
```
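Once joined, the single GGUF can be loaded as usual, for example with llama.cpp's `main` binary (a quick smoke test; the prompt and token count are arbitrary):

```bash
# Load the joined q6_K file and generate a short completion
./main -m codellama-70b-python-q6_K.gguf -n 128 -p "def fibonacci(n):"
```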