---
inference: false
license: cc-by-nc-sa-4.0
language:
- de
- en
library_name: transformers
pipeline_tag: text-generation
---

# Orca Mini v2 German 7b GGML

These files are GGML format model files for [Orca Mini v2 German 7b](https://huggingface.co/jphme/orca_mini_v2_ger_7b). Please refer to the original repository for all information about the model.

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
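
As a quick programmatic smoke test, a minimal ctransformers sketch might look like this (untested here; it assumes a GGML-era ctransformers release and that the `.bin` file from this repository has been downloaded into the working directory; the prompt follows the template shown below):

```python
from ctransformers import AutoModelForCausalLM

# Load the local GGML file; model_type="llama" selects the LLaMA architecture.
llm = AutoModelForCausalLM.from_pretrained(
    "orca-mini-v2-ger-7b.ggmlv3.q4_0.bin",
    model_type="llama",
)

# Single-turn prompt in the Orca Mini format (see "Prompt template" below).
prompt = (
    "### System:\n"
    "You are an AI assistant that follows instruction extremely well. "
    "Help as much as you can.\n\n"
    "### User:\n"
    "Wie heißt die Hauptstadt von Deutschland?\n\n"  # "What is the capital of Germany?"
    "### Response:\n"
)
print(llm(prompt, max_new_tokens=128, temperature=0.7))
```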

## Prompt template:

```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
prompt

### Response:
```
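For scripts, the template is easy to produce with a small helper (`build_prompt` is just an illustrative name, not part of the model or any library):

```python
# Default system message taken verbatim from the template above.
DEFAULT_SYSTEM = ("You are an AI assistant that follows instruction extremely well. "
                  "Help as much as you can.")

def build_prompt(user_message: str, system_message: str = DEFAULT_SYSTEM) -> str:
    """Format a single-turn prompt in the Orca Mini template."""
    return (f"### System:\n{system_message}\n\n"
            f"### User:\n{user_message}\n\n"
            f"### Response:\n")

print(build_prompt("Write a story about llamas"))
```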

## Compatibility

### `q4_0`

So far, I have only quantized a `q4_0` version for my own use. Please let me know if there is demand for other quantizations.
These should be compatible with any UIs, tools and libraries released since late May 2023.
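
If you want to sanity-check which GGML generation a downloaded file belongs to, a small header check like the following should work (an assumption, not something documented in this repository: GGMLv3 files use llama.cpp's "ggjt" container, which starts with a little-endian uint32 magic `0x67676a74` followed by a uint32 format version, 3 for GGMLv3):

```python
import struct

# Read the 8-byte header: uint32 magic, then uint32 format version (little-endian).
with open("orca-mini-v2-ger-7b.ggmlv3.q4_0.bin", "rb") as f:
    magic, version = struct.unpack("<II", f.read(8))

if magic == 0x67676A74 and version == 3:
    print("Looks like a GGMLv3 (ggjt v3) file, compatible with post-May-2023 tools.")
else:
    print(f"Unexpected header: magic=0x{magic:08x}, version={version}")
```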

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| orca-mini-v2-ger-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.83 GB | ~6.3 GB | Original llama.cpp quant method, 4-bit. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m orca-mini-v2-ger-7b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are a story writing assistant who writes very long, detailed and interesting stories\n\n### User:\nWrite a story about llamas\n\n### Response:\n"
```

If you're able to use full GPU offloading, you should use `-t 1` to get the best performance.

If you're not able to fully offload to the GPU, use more cores: change `-t 10` to the number of physical CPU cores you have, or to a lower number if that gives better performance.

Change `-ngl 32` to the number of layers to offload to the GPU; remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. A llama-cpp-python equivalent of this invocation is sketched below.
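
For scripted use, roughly the same settings can be expressed with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); this is a sketch assuming a GGML-era release of that library (later GGUF-only releases cannot load these files):

```python
from llama_cpp import Llama

# Mirror the CLI flags: -c 2048 -> n_ctx, -ngl 32 -> n_gpu_layers, -t 10 -> n_threads.
llm = Llama(
    model_path="orca-mini-v2-ger-7b.ggmlv3.q4_0.bin",
    n_ctx=2048,
    n_gpu_layers=32,   # remove / set to 0 if you don't have GPU acceleration
    n_threads=10,      # number of physical CPU cores
)

prompt = (
    "### System:\nYou are a story writing assistant who writes very long, "
    "detailed and interesting stories\n\n"
    "### User:\nWrite a story about llamas\n\n### Response:\n"
)

# --temp 0.7 --repeat_penalty 1.1; stop before the model starts a new turn.
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1,
             stop=["### User:"])
print(output["choices"][0]["text"])
```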

## How to run in `text-generation-webui`

Further instructions are available here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

## Thanks

Special thanks to [Pankaj Mathur](https://huggingface.co/psmathur) for the great Orca Mini base model and to [TheBloke](https://huggingface.co/TheBloke) for his great work quantizing billions of models (and for his README template, which this one follows).