Commit 8e11fd9 by xxx777xxxASD (parent: 25b5572): Update README.md
---
license: llama3
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/jgyhmI451GRXri5hEj3lh.png)
(I may change the waifu picture later.)

An experimental RP-oriented MoE. The idea was to get a model equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.

### ChaoticSoliloquy-4x8B
```yaml
base_model: jeiku_Chaos_RP_l3_8B
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.6-L3-8B
  - source_model: jeiku_Chaos_RP_l3_8B
  - source_model: openlynn_Llama-3-Soliloquy-8B
  - source_model: Sao10K_L3-Solana-8B-v1
```
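With `experts_per_token: 2`, each token is routed to the two highest-scoring of the four experts at every MoE layer (`gate_mode: random` means the router weights are randomly initialized rather than derived from hidden states). A minimal illustrative sketch of top-2 softmax routing, with made-up logits; this is not mergekit's or the model's actual implementation:

```python
import math
import random

def top_k_routing(gate_logits, k=2):
    """Pick the top-k experts from router logits; return (index, weight) pairs."""
    # Softmax over all expert logits (shifted by the max for stability)
    m = max(gate_logits)
    exps = [math.exp(x - m) for x in gate_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the k highest-probability experts
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    # Renormalize the selected weights so they sum to 1
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

# One router decision for a 4-expert MoE, as in the config above
random.seed(0)
logits = [random.gauss(0.0, 1.0) for _ in range(4)]
chosen = top_k_routing(logits, k=2)
print(chosen)  # two (expert_index, weight) pairs; weights sum to 1
```

Only the two selected experts run for that token, which is why a 4x8B MoE has roughly the per-token compute of a dense ~2x8B forward pass while keeping all four experts' parameters.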

## Models used

- [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B)
- [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B)
- [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)
- [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)