v000000 committed on
Commit d3a21f3
Parent: 697ca63

Update README.md

Files changed (1)
  1. README.md +52 -39
README.md CHANGED
@@ -15,42 +15,55 @@ base_model: v000000/SyntheticMoist-11B-v2
  This model was converted to GGUF format from [`v000000/SyntheticMoist-11B-v2`](https://huggingface.co/v000000/SyntheticMoist-11B-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/v000000/SyntheticMoist-11B-v2) for more details on the model.
 
- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux):
-
- ```bash
- brew install llama.cpp
- ```
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama --hf-repo v000000/SyntheticMoist-11B-v2-Q6_K-GGUF --hf-file syntheticmoist-11b-v2-q6_k.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo v000000/SyntheticMoist-11B-v2-Q6_K-GGUF --hf-file syntheticmoist-11b-v2-q6_k.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```
- ./main --hf-repo v000000/SyntheticMoist-11B-v2-Q6_K-GGUF --hf-file syntheticmoist-11b-v2-q6_k.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./server --hf-repo v000000/SyntheticMoist-11B-v2-Q6_K-GGUF --hf-file syntheticmoist-11b-v2-q6_k.gguf -c 2048
- ```
+ ### SyntheticMoist-v2
+ A roleplay (RP) model based on Solar. Higher merge density plus LimaRP led to better performance. Prompt format: Alpaca/Vicuna (an example Alpaca template is shown below).
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/nipwgMMVVEWvJqN3TSWZP.png)
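Since the card lists Alpaca/Vicuna as prompt formats, here is one widely used Alpaca-style template for reference (a common variant, not necessarily the exact template used in training):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```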
+
+ # Merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged with the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) method, using [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) as the base. For each fine-tune's delta from the base, DARE randomly drops a fraction (1 − `density`) of the delta weights and rescales the survivors by 1/`density` so their expected magnitude is preserved, while TIES elects a majority sign per parameter to resolve conflicts before the weighted deltas are added onto the base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [TheDrummer/Moistral-11B-v3](https://huggingface.co/TheDrummer/Moistral-11B-v3)
+ * [Himitsui/MedMitsu-Instruct-11B](https://huggingface.co/Himitsui/MedMitsu-Instruct-11B)
+ * [Himitsui/Kaiju-11B](https://huggingface.co/Himitsui/Kaiju-11B)
+ * [migtissera/Synthia-v3.0-11B](https://huggingface.co/migtissera/Synthia-v3.0-11B) + [jeiku/Re-Host_Limarp_Mistral](https://huggingface.co/jeiku/Re-Host_Limarp_Mistral) (the `+` is mergekit's model+LoRA syntax: the LimaRP LoRA is applied to Synthia before merging)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: Himitsui/MedMitsu-Instruct-11B
+     parameters:
+       weight: 0.13
+       density: 0.60
+   - model: Himitsui/Kaiju-11B
+     parameters:
+       weight: 0.22
+       density: 0.73
+   - model: migtissera/Synthia-v3.0-11B+jeiku/Re-Host_Limarp_Mistral
+     parameters:
+       weight: 0.28
+       density: 0.80
+   - model: TheDrummer/Moistral-11B-v3
+     parameters:
+       weight: 0.37
+       density: 0.85
+ merge_method: dare_ties
+ base_model: Sao10K/Fimbulvetr-11B-v2
+ parameters:
+   int8_mask: true
+ dtype: bfloat16
+ ```
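To reproduce the merge, a configuration like the one above can be run with mergekit's command-line tool. A minimal sketch, assuming mergekit is installed and the YAML is saved as `config.yaml` (file and output directory names are illustrative):

```bash
# Install mergekit, then run the merge defined in config.yaml.
# The output directory name is arbitrary; --cuda uses a GPU if available.
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```

The merged safetensors model can then be converted to GGUF, as this repo's files were, via llama.cpp or the GGUF-my-repo space.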