---
license: apache-2.0
---
## Introduction

Quantizing the [NTQAI/Nxcode-CQ-7B-orpo](https://huggingface.co/NTQAI/Nxcode-CQ-7B-orpo) model to f16, q2, q3, q4, q5, q6, and q8 with llama.cpp.
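The quantization described above can be sketched with llama.cpp's standard tooling. A minimal command sequence, assuming a local checkout of llama.cpp and illustrative file names (`Nxcode-CQ-7B-orpo-f16.gguf` and so on are not from this card); binary names vary by llama.cpp version (older releases use `./quantize`):

```shell
# Convert the Hugging Face checkpoint to an f16 GGUF file.
# Paths and output names here are illustrative assumptions.
python convert_hf_to_gguf.py ./Nxcode-CQ-7B-orpo \
  --outtype f16 \
  --outfile Nxcode-CQ-7B-orpo-f16.gguf

# Quantize the f16 GGUF to one of the smaller formats listed above,
# e.g. a 4-bit K-quant; repeat with other types (Q2_K, Q3_K_M, Q5_K_M,
# Q6_K, Q8_0, ...) to produce the full set of files.
./llama-quantize Nxcode-CQ-7B-orpo-f16.gguf \
  Nxcode-CQ-7B-orpo-Q4_K_M.gguf Q4_K_M
```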
## Prompt Template
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

{input}

{question}

### Response:
```

**Important:** for more details, check out [StructLM-7B-Mistral](https://huggingface.co/TIGER-Lab/StructLM-7B-Mistral).
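As a quick sanity check, the template can be filled programmatically before sending it to the model. A minimal sketch (the `build_prompt` helper and the example values are ours, not part of the original card):

```python
# Alpaca-style prompt template from the section above, with three slots.
TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

{input}

{question}

### Response:
"""


def build_prompt(instruction: str, context: str, question: str) -> str:
    """Substitute the three template slots and return the full prompt string."""
    return TEMPLATE.format(instruction=instruction, input=context, question=question)


# Hypothetical example values, purely for illustration.
prompt = build_prompt(
    "Answer the question using the table below.",
    "col : name | age  row 1 : Alice | 30",
    "How old is Alice?",
)
print(prompt)
```

The resulting string ends with `### Response:`, so the model's completion continues directly from there.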
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f1c1b9a39cf6f5c63f029a/q7QN1DZSycPx-gv9Peaw8.png)
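Once a quantized GGUF file has been produced, it can be tried locally with llama.cpp's CLI. A sketch under the same assumed file name as above (recent llama.cpp builds ship `llama-cli`; older ones call it `./main`):

```shell
# Load the quantized model and complete a prompt in the template format above.
# The file name and the prompt text are illustrative assumptions.
./llama-cli \
  -m Nxcode-CQ-7B-orpo-Q4_K_M.gguf \
  -n 256 \
  -p "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Write a Python function that reverses a string.

### Response:
"
```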