DavidAU committed
Commit d8bb4b4
1 Parent(s): b0dc8b8

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +54 -0
README.md ADDED
@@ -0,0 +1,54 @@
---
license: apache-2.0
tags:
- Safetensors
- mistral
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- athirdpath/NSFW_DPO_Noromaid-7b
- transformers
- safetensors
- text-generation
- en
- dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v2
- dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
- license:cc-by-nc-4.0
- autotrain_compatible
- endpoints_compatible
- has_space
- region:us
- llama-cpp
- gguf-my-repo
---

# DavidAU/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-Q6_K-GGUF
This model was converted to GGUF format from [`MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1`](https://huggingface.co/MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
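
After installation you can quickly confirm the binaries are available. This is only a sanity check and assumes the `llama-cli` binary installed above is on your PATH:

```bash
# Print the CLI help text to confirm llama.cpp is installed and on the PATH
llama-cli --help
```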
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-Q6_K-GGUF --model nsfw_dpo_noromaid-7b-mistral-7b-instruct-v0.1.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-Q6_K-GGUF --model nsfw_dpo_noromaid-7b-mistral-7b-instruct-v0.1.Q6_K.gguf -c 2048
```
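
Once the server is running you can send it completion requests over HTTP. Below is a minimal sketch using the `/completion` endpoint of `llama-server`, assuming the default bind address and port (`127.0.0.1:8080`):

```bash
# Send a completion request to the running llama-server instance
# (assumes the default host/port; adjust if you started the server differently)
curl --request POST http://127.0.0.1:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "The meaning to life and the universe is", "n_predict": 128}'
```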

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m nsfw_dpo_noromaid-7b-mistral-7b-instruct-v0.1.Q6_K.gguf -n 128
```
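
The build-from-source route above expects the GGUF file to already be present in the working directory. One way to fetch it is with the `huggingface_hub` CLI; this is a sketch that assumes you have it installed (`pip install -U huggingface_hub`):

```bash
# Download the Q6_K GGUF file from this repo into the current directory
huggingface-cli download DavidAU/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-Q6_K-GGUF \
  nsfw_dpo_noromaid-7b-mistral-7b-instruct-v0.1.Q6_K.gguf --local-dir .
```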