This repo contains a copy of the original model quantized to FP8. Original: rAIfle/SorcererLM-8x22b-bf16
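Since this is an FP8-Dynamic (compressed-tensors-style) checkpoint, it should load directly in an FP8-capable backend. A minimal sketch, assuming vLLM on hardware with FP8 support; the `tensor_parallel_size` value and the prompt are placeholders:

```python
# Hedged sketch: loading the FP8 quant with vLLM (assumes FP8-capable GPUs).
from vllm import LLM, SamplingParams

llm = LLM(
    model="CalamitousFelicitousness/SorcererLM-8x22b-FP8-Dynamic",
    tensor_parallel_size=8,  # assumption: set to your actual GPU count
)

# Vicuna 1.1-style prompt, per the Prompting section below.
prompt = "USER: Write a short scene in a tavern.\nASSISTANT:"
outputs = llm.generate([prompt], SamplingParams(temperature=1.0, min_p=0.05, max_tokens=128))
print(outputs[0].outputs[0].text)
```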
# SorcererLM-8x22b-bf16
Oh boy, here we go. Low-rank (r=16, alpha=32) 16-bit LoRA on top of WizardLM-2-8x22B, trained on 2 epochs of (cleaned & deduped) c2-logs. As far as I can tell, this is an upgrade from WizardLM-2-8x22B for RP purposes.
Alongside this ready-to-use release I'm also releasing the LoRA itself, as well as the earlier epoch-1 checkpoint of the LoRA.
## Why A LoRA?
The choice was fully intentional. I briefly considered a full fine-tune (FFT), but for this particular use-case a LoRA seemed a better fit. WizardLM-2-8x22B is smart by itself, but the vocabulary it uses leaves much to be desired when it comes to RP. By training a low-rank LoRA on top of it to teach it some of Claude's writing style, we remedy that.
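For reference, the stated hyperparameters map onto a `peft`-style config roughly like the sketch below; the `target_modules` and dropout are assumptions on my part, and the actual values live in the qlora-pipe configs in the `train` subfolder:

```python
# Hedged sketch of the stated LoRA shape using the `peft` library.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,            # low rank, as stated above
    lora_alpha=32,   # alpha=32, as stated above
    lora_dropout=0.0,  # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
```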
## Prompting
- Use the templates in Quant-Cartel/Recommended-Settings under the SorcererLM folder.
- Or Vicuna 1.1 and a sane context template.

It's somewhat sensitive to samplers; I'd recommend Temperature 1, MinP 0.05 and a dash of DRY, but YMMV (a sketch follows below). Shorter prompts seem to work better, too.
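For backends that expose an OpenAI-compatible API (e.g. a local vLLM or TabbyAPI server), a hedged sketch of those samplers; the base URL, API key, and model name are placeholders, `min_p` is passed via `extra_body` because it's a non-standard parameter, and DRY is only available on backends that implement it:

```python
# Hedged sketch: the recommended samplers over an OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-local")  # placeholders
resp = client.chat.completions.create(
    model="SorcererLM-8x22b-FP8-Dynamic",  # placeholder model name
    messages=[{"role": "user", "content": "Describe the tavern."}],
    temperature=1.0,                 # Temperature 1
    extra_body={"min_p": 0.05},      # MinP 0.05; non-standard, so extra_body
    # DRY: configure on the backend if supported (not part of the OpenAI API)
)
print(resp.choices[0].message.content)
```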
## Quantized Versions
## Acknowledgments
The main shoutout I want to make is to my Cartel bros, Envoid and particularly I^2, for being amazing. I count this as a team effort, so they deserve kudos too if you like this.
## Training
Trained using qlora-pipe. Configs are included in the `train` subfolder.
## Safety
... n/a