
GGUF quants with an importance matrix (iMatrix) for the following model: https://huggingface.co/ChuckMcSneed/WinterGoddess-1.4x-70b-32k

The quants will come slowly; they are being produced on an i7-6700K.
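For reference, here is a minimal sketch of how such iMatrix-assisted quants are typically produced with llama.cpp. The binary names (`imatrix`/`quantize` vs. `llama-imatrix`/`llama-quantize`), file names, and the calibration text are assumptions for illustration, not the exact commands used for this repo.

```python
# Sketch of an imatrix-assisted GGUF quantization with llama.cpp tools.
# Binary names, file paths and the calibration corpus are assumptions --
# adjust them to your llama.cpp build and local files.
import subprocess

F16_GGUF = "WinterGoddess-1.4x-70b-32k-f16.gguf"  # full-precision GGUF conversion (assumed name)
IMATRIX = "WinterGoddess-1.4x-70b-32k.imatrix"    # importance matrix output file
CALIB = "calibration.txt"                         # any representative text corpus
OUT = "WinterGoddess-1.4x-70b-32k-Q3_K_M.gguf"

# 1) Collect the importance matrix over the calibration text.
subprocess.run(
    ["./imatrix", "-m", F16_GGUF, "-f", CALIB, "-o", IMATRIX],
    check=True,
)

# 2) Quantize to Q3_K_M, weighting quantization error by the imatrix.
subprocess.run(
    ["./quantize", "--imatrix", IMATRIX, F16_GGUF, OUT, "Q3_K_M"],
    check=True,
)
```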

ChuckMcSneed's model is itself based on Sao10K's WinterGoddess.

This WinterGoddess 32k performs best at 16k context, unlike its spiritual predecessor (https://huggingface.co/Nexesenex/WinterGoddess-1.4x-limarpv3-70B-L2-32k-Requant.GGUF), which performs best at 8-10k.
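A hedged sketch of loading one of these quants at the recommended 16k context with llama-cpp-python follows; the file name, thread count, GPU offload value and prompt template are placeholders for your own setup.

```python
# Sketch: loading the Q3_K_M quant at 16k context with llama-cpp-python.
# Model path, n_threads, n_gpu_layers and the prompt format are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf",
    n_ctx=16384,     # the model behaves best around 16k, per the note above
    n_threads=8,     # CPU threads
    n_gpu_layers=0,  # raise this if you can offload layers to a GPU
)

out = llm(
    "### Instruction:\nWrite a short winter poem.\n\n### Response:\n",
    max_tokens=200,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```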

LlamaCPP benchmarks (raw log entries; a reproduction sketch follows the list):

- WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf,-,Hellaswag,86,400,,2024-02-06 00:00:00,PEC4,70b,Llama_2,4096,,,GGUF,ChuckMcSneed,Nexesenex,
- WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf,-,Hellaswag,86.1,1000,,2024-02-06 00:00:00,PEC4,70b,Llama_2,4096,,,GGUF,ChuckMcSneed,Nexesenex,
- WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf,-,Arc-Challenge,55.18394649,,299,2024-02-06 05:40:00,PEC4,70b,Llama_2,4096,,,GGUF,ChuckMcSneed,Nexesenex,
- WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf,-,Arc-Easy,74.56140351,,570,2024-02-06 05:40:00,PEC4,70b,Llama_2,4096,,,GGUF,ChuckMcSneed,Nexesenex,
- WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf,-,MMLU,46.64536741,,313,2024-02-06 05:40:00,PEC4,70b,Llama_2,4096,,,GGUF,ChuckMcSneed,Nexesenex,
- WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf,-,Thruthful-QA,40.51407589,19.8590,817,2024-02-06 05:40:00,PEC4,70b,Llama_2,4096,,,GGUF,ChuckMcSneed,Nexesenex,
- WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf,-,Winogrande,79.9526,,1267,2024-02-06 05:40:00,PEC4,70b,Llama_2,4096,,,GGUF,ChuckMcSneed,Nexesenex,
- WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf,-,wikitext,4.5512,512,512,2024-02-06 00:00:00,PEC8,70b,Llama_2,4096,,,GGUF,ChuckMcSneed,Nexesenex,81
- WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf,-,wikitext,4.3786,512,512,2024-02-06 00:00:00,PEC4,70b,Llama_2,4096,,,GGUF,ChuckMcSneed,Nexesenex,81
- WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf,-,wikitext,4.0049,512,512,2024-02-06 00:00:00,PEC4,70b,Llama_2,4096,,,GGUF,ChuckMcSneed,Nexesenex,655
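The wikitext and Hellaswag entries above come from llama.cpp's `perplexity` tool. The sketch below shows how one might reproduce runs of that kind; the binary name and dataset paths are assumptions, and the exact scores will depend on the llama.cpp build used.

```python
# Sketch: reproducing wikitext perplexity and Hellaswag runs with llama.cpp's
# perplexity tool. Binary name and dataset file paths are assumptions.
import subprocess

MODEL = "WinterGoddess-1.4x-70b-32k-b2081-Q3_K_M.gguf"

# Wikitext-2 perplexity at a 512-token context, as in the wikitext rows above.
subprocess.run(
    ["./perplexity", "-m", MODEL, "-f", "wiki.test.raw", "-c", "512"],
    check=True,
)

# Hellaswag score over the first 400 tasks, as in the first Hellaswag row.
subprocess.run(
    ["./perplexity", "-m", MODEL, "-f", "hellaswag_val_full.txt",
     "--hellaswag", "--hellaswag-tasks", "400"],
    check=True,
)
```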