Steelskull committed
Commit 8b0f2ad
1 Parent(s): 740d59f

Update README.md

Files changed (1)
  1. README.md +69 -109
README.md CHANGED
@@ -10,129 +10,89 @@ tags:
  - Yhyu13/LMCocktail-10.7B-v1
  ---
 
- # Umbra-v2-MoE-4x10.7
 
- Umbra-v2-MoE-4x10.7 is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
- * [vicgalle/CarbonBeagle-11B](https://huggingface.co/vicgalle/CarbonBeagle-11B)
- * [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
- * [bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED](https://huggingface.co/bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED)
- * [Yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
 
  ## 🧩 Configuration
 
- ```yamlbase_model: vicgalle/CarbonBeagle-11B
  gate_mode: hidden
  dtype: bfloat16
-
  experts:
  - source_model: vicgalle/CarbonBeagle-11B
- positive_prompts:
- - "versatile"
- - "adaptive"
- - "comprehensive"
- - "integrated"
- - "balanced"
- - "all-rounder"
- - "flexible"
- - "wide-ranging"
- - "multi-disciplinary"
- - "holistic"
- - "innovative"
- - "eclectic"
- - "resourceful"
- - "dynamic"
- - "robust"
-
- negative_prompts:
- - "narrow"
- - "specialized"
- - "limited"
- - "focused"

  - source_model: Sao10K/Fimbulvetr-10.7B-v1
- positive_prompts:
- - "creative"
- - "storytelling"
- - "expressive"
- - "imaginative"
- - "engaging"
- - "verbose"
- - "narrative"
- - "descriptive"
- - "elaborate"
- - "fictional"
- - "artistic"
- - "vivid"
- - "colorful"
- - "fantastical"
- - "lyrical"
-
- negative_prompts:
- - "sorry"
- - "I cannot"
- - "factual"
- - "concise"
- - "straightforward"
- - "objective"
- - "dry"

  - source_model: bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
- positive_prompts:
- - "intelligent"
- - "analytical"
- - "accurate"
- - "knowledgeable"
- - "logical"
- - "data-driven"
- - "scientific"
- - "rational"
- - "precise"
- - "methodical"
- - "empirical"
- - "systematic"
- - "efficient"
- - "scholarly"
- - "statistical"
- - "calculate"
- - "compute"
- - "solve"
- - "work"
- - "python"
- - "javascript"
- - "programming"
- - "algorithm"
- - "tell me"
- - "assistant"
-
- negative_prompts:
- - "creative"
- - "imaginative"
- - "abstract"
- - "emotional"
- - "artistic"
- - "speculative"

  - source_model: Yhyu13/LMCocktail-10.7B-v1
- positive_prompts:
- - "instructive"
- - "verbose"
- - "descriptive"
- - "clear"
- - "detailed"
- - "informative"
- - "explanatory"
- - "elucidative"
- - "articulate"
- - "comprehensive"
- - "educational"
- - "thorough"
- - "specific"
- - "clarifying"
- - "structured"
-
- negative_prompts:
- - "concise"
- - "vague"```
 
  ## 💻 Usage
 
  - Yhyu13/LMCocktail-10.7B-v1
  ---
 
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/hen3fNHRD7BCPvd2KkfjZ.png)
 
+ # Umbra-v2.1-MoE-4x10.7
+
+ Umbra is an offshoot of the [Lumosia Series] with a focus on general knowledge and RP/ERP.
+
+ Umbra v2.1 has updated models and a set of revamped positive and negative prompts.
+
+ This model was built around the idea that someone might want a general assistant that can also tell stories and handle RP/ERP on request.
+
+ This is a very experimental model: a MoE combination of SOLAR models, each a personal favorite.
+
+ Base context is 4k, but the model stays coherent up to 16k.
+
+ Please let me know how the model works for you.
+
+ An Umbra personality Tavern card has been added to the files.
+
+ Update:
+ Umbra-v2 had a token error that is fixed in Umbra-v2.1.
+
+ Prompt template:
+ ```
+ ### System:
+
+ ### USER:{prompt}
+
+ ### Assistant:
+ ```
+
+ Settings:
+ ```
+ Temp: 1.0
+ min-p: 0.02-0.1
+ ```
+
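+ Below is a minimal sketch of using this template and these settings with the Transformers library. It is an illustration rather than an official snippet: the repo id `Steelskull/Umbra-v2.1-MoE-4x10.7`, the example prompts, and the generation length are assumptions, and the `min_p` argument needs a reasonably recent transformers release.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Assumed repo id, inferred from the model name above.
+ model_id = "Steelskull/Umbra-v2.1-MoE-4x10.7"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
+
+ # Fill in the card's prompt template.
+ system = "You are Umbra, a general assistant and storyteller."
+ prompt = "Tell me a short story about a lighthouse keeper."
+ text = f"### System: {system}\n\n### USER:{prompt}\n\n### Assistant:"
+
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
+ outputs = model.generate(
+     **inputs,
+     max_new_tokens=512,
+     do_sample=True,
+     temperature=1.0,  # Temp: 1.0, per the card's settings
+     min_p=0.02,       # low end of the suggested 0.02-0.1 range
+ )
+ print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
+ ```
+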
+ ## Evals:
+
+ Posted soon:
+
+ * Avg:
+ * ARC:
+ * HellaSwag:
+ * MMLU:
+ * T-QA:
+ * Winogrande:
+ * GSM8K:
+
+ ## Examples:
+ ```
+ posted soon
+ ```
+ ```
+ posted soon
+ ```
 
  ## 🧩 Configuration
 
+ ```
+ base_model: vicgalle/CarbonBeagle-11B
  gate_mode: hidden
  dtype: bfloat16
  experts:
  - source_model: vicgalle/CarbonBeagle-11B
+ positive_prompts: [Revamped]

  - source_model: Sao10K/Fimbulvetr-10.7B-v1
+ positive_prompts: [Revamped]

  - source_model: bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
+ positive_prompts: [Revamped]

  - source_model: Yhyu13/LMCocktail-10.7B-v1
+ positive_prompts: [Revamped]
+ ```
+
+ Umbra-v2.1-MoE-4x10.7 is a Mixture of Experts (MoE) made with the following models:
+ * [vicgalle/CarbonBeagle-11B](https://huggingface.co/vicgalle/CarbonBeagle-11B)
+ * [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
+ * [bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED](https://huggingface.co/bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED)
+ * [Yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
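+
+ As a quick sanity check, the sketch below loads only the model config and prints the expert layout. This is a minimal sketch, assuming the merge produced a Mixtral-style MoE checkpoint (the format mergekit's MoE merge emits for SOLAR/Llama-family experts) and reusing the assumed repo id from above.
+
+ ```python
+ from transformers import AutoConfig
+
+ # Assumed repo id for the merged model.
+ config = AutoConfig.from_pretrained("Steelskull/Umbra-v2.1-MoE-4x10.7")
+
+ # Mixtral-style MoE configs expose the expert layout directly.
+ print(config.model_type)                             # assumption: "mixtral"
+ print(getattr(config, "num_local_experts", None))    # assumption: 4 experts
+ print(getattr(config, "num_experts_per_tok", None))  # experts routed per token
+ ```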
 
  ## 💻 Usage