---
pipeline_tag: text-generation
tags:
- llama
- ggml
---

**Quantization from:**
[Tap-M/Luna-AI-Llama2-Uncensored](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored)

**Converted to the GGML format with:**
[llama.cpp master-294f424 (JUL 19, 2023)](https://github.com/ggerganov/llama.cpp/releases/tag/master-294f424)

**Tested with:**
[koboldcpp 1.35](https://github.com/LostRuins/koboldcpp/releases/tag/v1.35)

**Example usage:**
```
koboldcpp.exe Luna-AI-Llama2-Uncensored-ggmlv3.Q2_K --threads 6 --stream --smartcontext --unbantokens --noblas
```

**Tested with the following format (refer to the original model and [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) for details):**
```
### Instruction:
You're a digital assistant designed to provide helpful and accurate responses to the user.

### Input:
{input}

### Response:
```
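
A minimal sketch of assembling this prompt format programmatically, e.g. before sending text to the model via an API or bindings. The `build_prompt` helper and the example input are illustrative, not part of the model or koboldcpp:

```python
def build_prompt(user_input: str) -> str:
    """Fill the Alpaca-style template shown above with the user's input."""
    system = (
        "You're a digital assistant designed to provide "
        "helpful and accurate responses to the user."
    )
    return (
        "### Instruction:\n"
        f"{system}\n\n"
        "### Input:\n"
        f"{user_input}\n\n"
        "### Response:\n"
    )

# The model's completion is expected to follow the trailing "### Response:" line.
print(build_prompt("Summarize the plot of Hamlet in one sentence."))
```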