2024-05-15 update

#8 · opened by froggeric

Added:

  • WizardLM-2-8x22B
  • llmixer/BigWeave-v16-103b
  • mistralai/Mixtral-8x22B-Instruct-v0.1
  • Undi95/Miqu-MS-70B
  • KatyTheCutie/EstopianMaid-13B
  • meta-llama/Meta-Llama-3-70B-Instruct
  • a few personal experiments on miqu self-merges using attenuation (see the sketch below)
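
For anyone wondering what "self-merge with attenuation" means here: roughly, a block of layers is duplicated, and the output projections of the repeated copies (o_proj and down_proj) are scaled down so the residual stream isn't amplified by the extra pass. The toy sketch below shows the general idea in plain PyTorch/transformers; the tiny config, the layer ranges, and the 0.7 factor are purely illustrative assumptions, not the actual settings from these experiments.

```python
import copy
import torch
from torch import nn
from transformers import LlamaConfig, LlamaForCausalLM

# Tiny randomly-initialised Llama-style model so the sketch runs on CPU;
# a real self-merge would start from a pretrained checkpoint instead.
config = LlamaConfig(
    vocab_size=1000,
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=8,
    num_attention_heads=4,
    num_key_value_heads=4,
)
model = LlamaForCausalLM(config)

SCALE = 0.7  # illustrative attenuation factor for the duplicated layers

layers = list(model.model.layers)

# Self-merge: keep layers 0-5, then append an attenuated copy of layers 2-7.
duplicated = [copy.deepcopy(layer) for layer in layers[2:8]]
for layer in duplicated:
    # Scale the projections that write back into the residual stream,
    # so the repeated block contributes less than the original pass.
    layer.self_attn.o_proj.weight.data.mul_(SCALE)
    layer.mlp.down_proj.weight.data.mul_(SCALE)

merged_layers = layers[:6] + duplicated

# Re-index the attention modules so KV-cache bookkeeping stays consistent.
for idx, layer in enumerate(merged_layers):
    if hasattr(layer.self_attn, "layer_idx"):
        layer.self_attn.layer_idx = idx

model.model.layers = nn.ModuleList(merged_layers)
model.config.num_hidden_layers = len(merged_layers)

# Quick sanity check: the enlarged model still produces logits.
tokens = torch.randint(0, config.vocab_size, (1, 16))
with torch.no_grad():
    out = model(tokens, use_cache=False)
print(out.logits.shape)  # (1, 16, 1000)
```

In practice this is done declaratively (typically with a mergekit passthrough config) rather than by editing tensors by hand, but the effect on the weights is the same.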

benchmark-results.png

@froggeric hi! Thanks for doing these creativity benchmarks. I feel the community has sorely needed these ever since Wolfram stopped doing his roleplaying tests. Just wanted to suggest a few models to add to the list, in case they aren't already on your radar:

Tiny:

  • Sao10K/Fimbulvetr-11B-v2 (many people's favorite tiny RP model)

Small:

  • cognitivecomputations/dolphin-2.9.1-yi-1.5-34b (new release)
  • ParasiticRogue/Merged-RP-Stew-V2-34B (impressive RP model inspired by brucethemoose's merges)
  • AetherResearch/Cerebrum-1.0-8x7b (smartest 8x7B I've used)

Medium:

  • abacusai/Smaug-Llama-3-70B-Instruct (new L3 model that claims to improve on the base model)
  • ShinojiResearch/Senku-70B-Full (supposedly improves on base Miqu)
  • alchemonaut/QuartetAnemoi-70B-t0.0001 (my favorite Miqu RP model before Midnight Miqu)
