---
pipeline_tag: text-generation
tags:
- llama
- ggml
---

**Quantization from:**
[TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged](https://huggingface.co/TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged)

**Converted to the GGML format with:**
[llama.cpp master-b5fe67f (JUL 22, 2023)](https://github.com/ggerganov/llama.cpp/releases/tag/master-b5fe67f)

**Tested with:**
[koboldcpp 1.36](https://github.com/LostRuins/koboldcpp/releases/tag/v1.36)

**Example usage:**
```
koboldcpp.exe llama2-7b-chat-hf-codeCherryPop-qLoRA-merged-ggmlv3.Q6_K.bin --threads 6 --contextsize 4096 --stream --smartcontext --unbantokens --ropeconfig 1.0 10000 --noblas
```

**Tested with the following format (refer to the original model and [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) for additional details):**
```
### Instruction:
{code request}

### Response:
```
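As an illustrative sketch (not part of the original model card), the prompt template above can be assembled programmatically before sending it to the model; the `build_prompt` helper name is hypothetical:

```python
def build_prompt(code_request: str) -> str:
    """Assemble an Alpaca-style prompt in the format this model was tested with."""
    return (
        "### Instruction:\n"
        f"{code_request}\n"
        "\n"
        "### Response:\n"
    )

prompt = build_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The model then completes the text after the `### Response:` marker.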