MarsupialAI committed on
Commit 6f58e84
1 Parent(s): 4afe8eb

Update README.md

Files changed (1)
  1. README.md +9 -3
README.md CHANGED
@@ -4,13 +4,20 @@ license_name: yi-other
  ---
  # Yeet 51b 200k

- This model is a rotating-stack merge of three Yi 34b 200k models in a 51b (90 layer) configuration. My reasoning behind this merge was twofold: I'd never seen a stacked merge made from 34b models, and I thought that maybe this could give near-70b performance but with a much larger context window, but still fitting within 48GB of VRAM. I think the results are quite good. The model does as well as many 70b models at RP/ERP, chat, and storywriting. At Q4_K_S it will fit into a pair of 24GB GPUs with 32k context. Coherency at 32k is excellent, and will probably remain very good well beyond that thanks to the 200k base training.
+ This model is a rotating-stack merge of three Yi 34b 200k models in a 51b (90 layer) configuration. My reasoning behind this merge was twofold: I'd never seen a stacked merge made from 34b models, and I thought it might give near-70b performance with a much larger context window while still fitting within 48GB of VRAM. I think the results are quite good. The model does as well as many 70b models at RP/ERP, chat, and storywriting. At Q4_K_S it will fit into a pair of 24GB GPUs with 32k context. Coherency at 32k is excellent, and will probably remain very good well beyond that thanks to the 200k base training.
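As a sanity check on the 48GB claim above, here is a rough back-of-the-envelope estimate. The Q4_K_S bit width and the Yi attention geometry (8 KV heads, 128-dim heads, fp16 KV cache) are assumptions for illustration, not numbers taken from this repo.

```python
# Rough VRAM estimate for Q4_K_S weights plus a 32k fp16 KV cache (assumed geometry).
params_total = 51e9
bits_per_weight = 4.5                                   # approx. effective bpw for Q4_K_S
weights_gb = params_total * bits_per_weight / 8 / 1e9   # ~28.7 GB of quantized weights
kv_cache_gb = 2 * 90 * 8 * 128 * 32768 * 2 / 1e9        # K and V, 90 layers, 32k tokens: ~12.1 GB
print(f"~{weights_gb + kv_cache_gb:.1f} GB plus compute buffers, against 48 GB across two cards")
```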

  The gotcha here is speed. While it inferences as you'd expect for the model size, it's much slower than a similarly-sized 8x7b MoE. And while I personally find the output of this model to outperform any mixtral finetune I've seen so far, those finetunes are getting better all the time, and this really is achingly slow with a lot of context. I'm getting less than half a token per second on a pair of P40s with a full 32k prompt.

  But that's not to say this model (or even the 51b stack concept) is useless. If you're patient, you can get extremely good output with very deep context on attainable hardware. There are undoubtedly niche scenarios where this model or similarly-constructed models might be ideal.

+ Component models for the rotating stack are
+ - adamo1139/Yi-34B-200K-AEZAKMI-v2
+ - brucethemoose/Yi-34B-200K-DARE-megamerge-v8
+ - taozi555/RpBird-Yi-34B-200k
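For anyone curious what a stack like this looks like on paper, below is a minimal sketch in mergekit's passthrough format. The slice boundaries are invented for illustration (Yi 34b has 60 layers, and 90 stacked layers lands at roughly 51b parameters); this is not the author's actual recipe, and the "WTF is a rotating-stack merge?" section below explains the concept.

```yaml
# Hypothetical rotating-stack (passthrough) merge of the three component models.
# Slice boundaries are made up for illustration; they only need to total 90 layers.
slices:
  - sources:
      - model: adamo1139/Yi-34B-200K-AEZAKMI-v2
        layer_range: [0, 20]
  - sources:
      - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
        layer_range: [10, 30]
  - sources:
      - model: taozi555/RpBird-Yi-34B-200k
        layer_range: [20, 40]
  - sources:
      - model: adamo1139/Yi-34B-200K-AEZAKMI-v2
        layer_range: [30, 50]
  - sources:
      - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
        layer_range: [50, 60]
merge_method: passthrough
dtype: float16
```

Passthrough does no weight averaging; it simply concatenates the listed slices into one deeper model.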
 
 
 
 
+ 
+ This model is uncensored and perfectly capable of generating objectionable material. However, it is not an explicitly NSFW model, and it has never "gone rogue" and tried to insert NSFW content into SFW prompts in my experience. As with any LLM, no factual claims made by the model should be taken at face value. You know that boilerplate safety disclaimer that most professional models have? Assume this has it too. This model is for entertainment purposes only.
+ 
+ FP16 and Q4_K_S GGUFs are located here: https://huggingface.co/MarsupialAI/Yeet_51b_200k_GGUF_Q4KS_FP16

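To make the two-GPU, 32k-context setup concrete, here is a minimal llama-cpp-python sketch. The local filename and the even tensor split are illustrative assumptions; the actual Q4_K_S file comes from the GGUF repo linked above.

```python
# Minimal sketch: load the Q4_K_S GGUF across two 24GB GPUs with the full 32k context.
# Requires llama-cpp-python built with CUDA support; the filename below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="Yeet_51b_200k_Q4_K_S.gguf",  # hypothetical local path to the downloaded GGUF
    n_ctx=32768,              # the full 32k context window discussed above
    n_gpu_layers=-1,          # offload every layer
    tensor_split=[0.5, 0.5],  # split the weights roughly evenly across the two cards
)

out = llm("Write a short story about a fluffy bunny.", max_tokens=256)
print(out["choices"][0]["text"])
```

As noted above, expect generation to slow down considerably once the context fills up.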
  # Sample output
@@ -38,8 +45,7 @@ In the end, it was just a simple story about a cute and fluffy bunny who venture

  # Prompt format
- Seems to have the strongest affinity for Alpaca prompts, but Vicuna works as well. Considering the variety of components, most
- formats will probbaly work to some extent.
+ Seems to work fine with Alpaca prompts. Considering the variety of components, other formats are likely to work to some extent.

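For reference, the standard Alpaca layout (no-input variant) is shown below; {prompt} is a placeholder for the actual instruction.

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```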
  # WTF is a rotating-stack merge?
 