
GGUF quants with iMatrix for: https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_v0.1

The second version of a hard-benching model, this time without excessive overfit: its perplexity is a bit high, but it still drops at longer context before stabilizing, which means the model is genuinely usable. As far as I know, that makes it the smartest compromise among the amateur 7Bx2 Mistral MoE models as of 1 February 2024.
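To try one of the quants directly, here is a minimal sketch assuming the llama-cpp-python bindings (any recent version with GGUF support); the filename, context size, and prompt are only illustrative, so substitute whichever quant you downloaded from this repo.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and that one of the GGUF quants from this repo has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="FusionNet_7Bx2_MoE_v0.1-b1924-Q8_0.gguf",  # any quant from this repo
    n_ctx=8192,       # the model was benchmarked at its native 8192-token context
    n_gpu_layers=-1,  # offload everything to the GPU if VRAM allows; 0 = CPU only
)

out = llm("Q: Summarize what a Mixture-of-Experts model is.\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```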

Llama.cpp benchmarks:

All scores measured on FusionNet_7Bx2_MoE_v0.1-b1924-Q8_0.gguf (MoE 2x7B, 12.9B parameters, Mistral v0.2 base, 8192 native context), 2024-02-03.

| Benchmark      | Score       | Tasks |
|----------------|-------------|-------|
| Hellaswag      | 89.00       | 400   |
| Hellaswag_Bin  | 85.00       | 400   |
| Arc-Challenge  | 61.20401338 | 299   |
| Arc-Easy       | 76.31578947 | 570   |
| MMLU           | 44.40433213 | 277   |
| TruthfulQA     | 51.77478580 | 817   |
| Winogrande     | 83.5833     | 1267  |

Wikitext perplexity by context length:

| Context (tokens) | Perplexity |
|------------------|------------|
| 512              | 6.7506     |
| 4096             | 5.4589     |
| 6144             | 5.3443    |
| 7168             | 5.5125     |
| 8192             | 5.2504     |
| 10240            | 8.9976     |
| 12288            | 43.9996    |
| 16384            | 780.5238   |
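The wikitext rows make the point from the introduction concrete: perplexity keeps improving up to the native 8192-token context, then degrades sharply beyond it. A quick, self-contained way to eyeball that trend from the numbers above (pure Python, no model needed):

```python
# Perplexity-vs-context figures copied from the wikitext table above.
wikitext_ppl = {
    512: 6.7506, 4096: 5.4589, 6144: 5.3443, 7168: 5.5125,
    8192: 5.2504, 10240: 8.9976, 12288: 43.9996, 16384: 780.5238,
}

best_ctx = min(wikitext_ppl, key=wikitext_ppl.get)
print(f"Best perplexity: {wikitext_ppl[best_ctx]:.4f} at {best_ctx} tokens of context")

for ctx, ppl in sorted(wikitext_ppl.items()):
    note = "  <- beyond the native 8192 context" if ctx > 8192 else ""
    print(f"{ctx:>6} tokens: ppl {ppl:>9.4f}{note}")
```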