MarsupialAI committed on
Commit
d5d0a1d
1 Parent(s): 1cdc9ea

Update README.md

Files changed (1)
  1. README.md +12 -13
README.md CHANGED
@@ -5,29 +5,28 @@ tags:
  - rp
  - erp
  - chat
- - storywriting
+ - miqu
  ---
 
  # Melusine 103b
 
  ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65a531bc7ec6af0f95c707b1/NmlQi3OggdYF_Tyb_cs3D.jpeg)
 
- This model is a rotating-stack merge of three 70b models in a 103b (120 layer) configuration inspired by Venus 103b. The result of
- this "frankenmerge" is a large model that contains a little bit of everything. RP, chat, storywriting,
- and instruct are all well supported.
+ This model is a rotating-stack merge of three 70b models in a 103b (120 layer) configuration inspired by Venus 103b. All components
+ are miqu-based, and the result appears to retain the long-context capabilities of the base model.
 
  Component models for the rotating stack are
- - miqudev/miqu-1-70b
- - royallab/Aetheria-L2-70B
- - lizpreciatior/lzlv_70b_fp16_hf
+ - ShinojiResearch/Senku-70B-Full
+ - Undi95/Miqu-70B-Alpaca-DPO
+ - alchemonaut/QuartetAnemoi-70B-t0.0001
 
- This model is *mostly* uncensored and is capable of generating objectionable material with a suitable prompt. However, it is not an explicitly-NSFW model,
- and some remnants of Miqu's censoring do occasionally pop up. As with any LLM, no factual claims
- made by the model should be taken at face value. You know that boilerplate safety disclaimer that most professional models have?
- Assume this has it too. This model is for entertainment purposes only.
+ This model is *mostly* de-censored and is capable of generating objectionable material. Depending on prompts, remnants of the original
+ censorship may pop up. Due to some of the constituent parts, extremely objectionable material may also be generated under certain
+ circumstances. As with any LLM, no factual claims made by the model should be taken at face value. You know that boilerplate safety
+ disclaimer that most professional models have? Assume this has it too. This model is for entertainment purposes only.
 
 
- FP16 and Q4_K_S GGUFs:
+ GGUFs:
 
 
  # Sample output
@@ -40,7 +39,7 @@ Write a detailed and humorous story about a cute and fluffy bunny that goes to a
 
 
  # Prompt format
- Seems to have the strongest affinity for Alpaca prompts. Others will work to some extent.
+ Seems to have the strongest affinity for Alpaca and ChatML prompts.
 
 
  # WTF is a rotating-stack merge?
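
The "rotating-stack" layout described above — three 70b (80-layer) donors interleaved into a 103b, 120-layer stack — can be sketched as a mergekit passthrough config. This is a hypothetical illustration only: the slice boundaries and rotation order below are invented for the sketch and are not the actual recipe used for this model.

```yaml
# Hypothetical mergekit config sketching a rotating-stack frankenmerge.
# Overlapping 20-layer slices are drawn from the three donors in rotation,
# giving 6 x 20 = 120 layers total. Slice ranges are illustrative only.
slices:
  - sources:
      - model: ShinojiResearch/Senku-70B-Full
        layer_range: [0, 20]
  - sources:
      - model: Undi95/Miqu-70B-Alpaca-DPO
        layer_range: [10, 30]
  - sources:
      - model: alchemonaut/QuartetAnemoi-70B-t0.0001
        layer_range: [20, 40]
  - sources:
      - model: ShinojiResearch/Senku-70B-Full
        layer_range: [30, 50]
  - sources:
      - model: Undi95/Miqu-70B-Alpaca-DPO
        layer_range: [40, 60]
  - sources:
      - model: alchemonaut/QuartetAnemoi-70B-t0.0001
        layer_range: [60, 80]
merge_method: passthrough
dtype: float16
```

Passthrough merges copy the selected layer ranges verbatim rather than averaging weights, which is why the result grows in depth (and parameter count) relative to its donors.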