v000000 committed
Commit
b58939d
1 Parent(s): 2e1c556

Update README.md

Files changed (1): README.md (+59 -31)
README.md CHANGED
@@ -3,52 +3,80 @@ library_name: transformers
  tags:
  - mergekit
  - merge
- - not-for-all-audiences
  - llama-cpp
  - gguf-my-repo
- base_model: v000000/MysticGem-v1.3-L2-13B
  ---

  # v000000/MysticGem-v1.3-L2-13B-Q4_K_M-GGUF
  This model was converted to GGUF format from [`v000000/MysticGem-v1.3-L2-13B`](https://huggingface.co/v000000/MysticGem-v1.3-L2-13B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/v000000/MysticGem-v1.3-L2-13B) for more details on the model.

- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux):
-
- ```bash
- brew install llama.cpp
- ```
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama --hf-repo v000000/MysticGem-v1.3-L2-13B-Q4_K_M-GGUF --hf-file mysticgem-v1.3-l2-13b-q4_k_m.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo v000000/MysticGem-v1.3-L2-13B-Q4_K_M-GGUF --hf-file mysticgem-v1.3-l2-13b-q4_k_m.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```

- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```

- Step 3: Run inference through the main binary.
- ```
- ./main --hf-repo v000000/MysticGem-v1.3-L2-13B-Q4_K_M-GGUF --hf-file mysticgem-v1.3-l2-13b-q4_k_m.gguf -p "The meaning to life and the universe is"
- ```
- or
  ```
- ./server --hf-repo v000000/MysticGem-v1.3-L2-13B-Q4_K_M-GGUF --hf-file mysticgem-v1.3-l2-13b-q4_k_m.gguf -c 2048
  ```
 
  tags:
  - mergekit
  - merge
  - llama-cpp
  - gguf-my-repo
+ - not-for-all-audiences
+ - llama
+ base_model: v000000/l2-test-001
  ---

+ # MysticGem-v1.3-L2-13B l2-test-001
+
+ RP model, pretty good results!
+
+ Probably final. Smart, novel, lewd, etc.
+
+ ### Ranked no. 1 on Chaiverse for 13B
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/lKzERgJPnOxxzWsGJ-86M.png)
+
  # v000000/MysticGem-v1.3-L2-13B-Q4_K_M-GGUF
  This model was converted to GGUF format from [`v000000/MysticGem-v1.3-L2-13B`](https://huggingface.co/v000000/MysticGem-v1.3-L2-13B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/v000000/MysticGem-v1.3-L2-13B) for more details on the model.

+ # merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3)
+ * [Locutusque/Orca-2-13b-SFT-v4](https://huggingface.co/Locutusque/Orca-2-13b-SFT-v4)
+ * [Sao10K/Stheno-Inverted-1.2-L2-13B](https://huggingface.co/Sao10K/Stheno-Inverted-1.2-L2-13B)
+ * [Walmart-the-bag/MysticFusion-13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B)
+ * [Undi95/Amethyst-13B](https://huggingface.co/Undi95/Amethyst-13B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: Undi95/Amethyst-13B
+     parameters:
+       weight: 0.3
+   - model: Walmart-the-bag/MysticFusion-13B
+     parameters:
+       weight: 0.35
+   - model: Sao10K/Stheno-Inverted-1.2-L2-13B
+     parameters:
+       weight: 0.15
+   - model: KoboldAI/LLaMA2-13B-Erebus-v3
+     parameters:
+       weight: 0.1
+   - model: Locutusque/Orca-2-13b-SFT-v4
+     parameters:
+       weight: 0.1
+ merge_method: linear
+ dtype: bfloat16
  ```
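The linear method boils down to a weighted average of the source models' parameter tensors, with the weights taken from the configuration above. A toy NumPy sketch of that computation (the 4×4 random tensors are illustrative stand-ins, not the real 13B weights):

```python
import numpy as np

# Toy stand-ins for one parameter tensor from each source model.
rng = np.random.default_rng(0)
tensors = {name: rng.standard_normal((4, 4)) for name in [
    "Undi95/Amethyst-13B",
    "Walmart-the-bag/MysticFusion-13B",
    "Sao10K/Stheno-Inverted-1.2-L2-13B",
    "KoboldAI/LLaMA2-13B-Erebus-v3",
    "Locutusque/Orca-2-13b-SFT-v4",
]}
weights = [0.3, 0.35, 0.15, 0.1, 0.1]  # taken from the YAML above; sums to 1.0

# Linear merge: element-wise weighted sum across the models.
merged = sum(w * t for w, t in zip(weights, tensors.values()))
```

The actual merge is typically run via mergekit's `mergekit-yaml` entry point, pointing it at the config file and an output directory.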
+
+ ### Prompt Format (Alpaca):
+ ```
+ Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ Take the role of {{char}} in a play where you leave a lasting impression on {{user}}. Never skip or gloss over {{char}}'s actions.
+
+ ### Instruction:
+ {prompt}
+ ### Response:
+ {output}
  ```
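The template above can be assembled programmatically before sending it to the model. A minimal Python sketch (the `build_prompt` helper and its defaults are illustrative, not part of the model or its tooling):

```python
# Alpaca-style template pieces from the prompt format above.
SYSTEM = ("Below is an instruction that describes a task. "
          "Write a response that appropriately completes the request.")
ROLEPLAY = ("Take the role of {char} in a play where you leave a lasting "
            "impression on {user}. Never skip or gloss over {char}'s actions.")

def build_prompt(prompt: str, char: str = "{{char}}", user: str = "{{user}}") -> str:
    """Fill the two-instruction Alpaca template; generation continues after '### Response:'."""
    return (f"{SYSTEM}\n\n"
            f"### Instruction:\n{ROLEPLAY.format(char=char, user=user)}\n\n"
            f"### Instruction:\n{prompt}\n"
            f"### Response:\n")

text = build_prompt("Introduce yourself.")
```

By default the `{{char}}`/`{{user}}` placeholders are left intact for frontends (e.g. character-card UIs) that substitute them later.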