Benchmarks

Opened by Nexesenex:

Here are my benchmarks for the Q3_K_S quant of this model:

llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,Hellaswag,84.50000000,,400,2024-01-31 01:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,Hellaswag,84.4,,1000,2024-01-31 01:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,Hellaswag_Bin,79.75,,400,2024-01-31 01:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,Hellaswag_Bin,82.8,,1000,2024-01-31 01:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,Arc-Challenge,43.47826087,,299,2024-01-31 05:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,Arc-Easy,65.96491228,,570,2024-01-31 05:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,MMLU,44.72843450,,313,2024-01-31 05:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,TruthfulQA,28.27417381,,817,2024-01-31 05:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,Winogrande,75.7695,,1267,2024-01-31 05:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,wikitext,128.7588,512,512,2024-01-31 01:40:00,PEC2,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,81
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,wikitext,9.9775,512,512,2024-01-31 01:40:00,PEC2.5,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,81
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,wikitext,4.1192,512,512,2024-01-31 01:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,655
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,wikitext,3.4525,4096,4096,2024-01-31 01:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,81
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,wikitext,3.3829,6144,6144,2024-01-31 01:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,54
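
For anyone wanting to slice these rows: the log is plain CSV with no header row, so in the sketch below the field names are only my guesses from the values (file, test, score, context size, sample count, date, prompt-eval config, and so on). Treat every label as an assumption.

```python
import csv, io

# One Hellaswag row and one wikitext row, copied from the log above.
raw_log = """llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,Hellaswag,84.4,,1000,2024-01-31 01:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,
llama2-70b-longlora-rope8-32k.Q3_K_S.gguf,-,wikitext,3.4525,4096,4096,2024-01-31 01:40:00,PEC8,70b,Mistral_Medium,32768,,,GGUF,Yukang,grimulkan,81"""

# The log has no header, so these labels are guesses from the row contents.
FIELDS = ["file", "sig", "test", "score", "ctx", "n", "date", "pec",
          "size", "comparator", "max_ctx", "x1", "x2", "fmt",
          "author", "quanter", "chunks"]

for row in csv.reader(io.StringIO(raw_log)):
    r = dict(zip(FIELDS, row))
    print(f"{r['test']:>10}: {float(r['score']):9.4f}  (n={r['n']})")
```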

Here are the 70b 3-bit quants in one graph:

[image: graph comparing the 70b 3-bit quants]

The LongLoRA base doesn't seem that great to start with. Now that we know Miqu is Mistral-Medium, we may not get it to fine-tune, unfortunately. But I wonder if ABF (adjusted base frequency) is a better approach to long context.
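
To make the contrast concrete: LongLoRA's "rope8" uses linear position scaling, while ABF raises the RoPE base frequency instead (CodeLlama shipped with theta = 1e6). A rough numpy sketch of the two schemes, with illustrative numbers:

```python
import numpy as np

dim, theta = 128, 10000.0          # per-head dim and default Llama2 RoPE base
pos = np.array([8192.0])           # a position well past the 4k training window
inv_freq = theta ** (-np.arange(0, dim, 2) / dim)

# Linear scaling (LongLoRA-style "rope8"): divide positions by the factor,
# squeezing 32k positions back into the range seen during pretraining.
angles_linear = (pos / 8.0)[:, None] * inv_freq

# ABF: keep positions as-is but raise the base (e.g. CodeLlama's theta = 1e6),
# which stretches the rotation wavelengths instead of compressing positions.
inv_freq_abf = 1e6 ** (-np.arange(0, dim, 2) / dim)
angles_abf = pos[:, None] * inv_freq_abf

print(angles_linear[0, :3], angles_abf[0, :3])
```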

A pity CodeLlama sucks, otherwise it might tell us more about ABF.

Is WinterGoddess this one? If so, it was done using the same LongLoRA base. It holds its own over the base, which contradicts my claim that LongLoRA is not a good base.

Nope, that's not my model. It seems to be this one. The original is gone.

Ah, that mystery one. I forgot that story. Well, maybe LongLoRA does suck then...

> The LongLoRA base doesn't seem that great to start with. Now that we know Miqu is Mistral-Medium, we may not get it to fine-tune, unfortunately.

Huh, did I miss something? The guy said it's an early alpha version (it sure feels like one, too). It's not Mistral Medium (or Mistral is lying).

An early alpha of Medium, I should have said. A Llama2 fine-tune, as many had speculated, which is the relevant part for this discussion. It shows what might be possible with good data selection, a good RoPE scaling method, and the existing Llama2 weights.

[image: Open LLM Leaderboard results for the extended WinterGoddess]
Got my extended WinterGoddess tested on the Open LLM Leaderboard. It seems extending to 32k kills GSM8K and MMLU. The losses here seem far less brutal than on my own bench, where almost half the performance is lost:
[image: own benchmark results for the extended WinterGoddess]

Cool. A pity we can't test Nexesenex/WinterGoddess-1.4x-limarpv3-70B-L2-32k-Requant.GGUF on the open leaderboard. Maybe with a back-conversion to fp16, like they did with Miqu.

I am changing my priorities and experimenting with ABF. I will look to release a base (non-instruct-tuned) model with it soon, and hopefully a LoRA too that can be merged into existing models. It would be interesting to see if it kills performance the way LongLoRA does.
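
If that works out, folding the adapter into an existing model should be the usual peft merge. A minimal sketch, where the base model ID and adapter path are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder names: swap in the real base model and the ABF LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/abf-32k-lora")

# Fold the adapter deltas into the base weights and drop the peft wrapper.
merged = model.merge_and_unload()
merged.save_pretrained("llama2-70b-abf-32k-merged")
```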

For those with the know-how, the source to dequant into an fp16 is this Q4_K_S quant: https://huggingface.co/mishima/WinterGoddess-1.4x-limarpv3-70B-L2-32k.GGUF
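
For intuition on what the dequant step does (this is not a full converter; Q4_K_S's actual super-block layout with packed scales is more involved), here is a numpy sketch using the simplest GGUF quant format, Q8_0:

```python
import numpy as np

# Each GGUF Q8_0 block is 34 bytes: an fp16 scale d followed by 32 int8
# weights q; the dequantized weight is simply d * q. K-quants like Q4_K_S
# add super-block scales, but the principle is the same.
def dequant_q8_0(raw: bytes) -> np.ndarray:
    blocks = np.frombuffer(raw, dtype=np.uint8).reshape(-1, 34)
    d = blocks[:, :2].copy().view(np.float16).astype(np.float32)   # (N, 1)
    q = blocks[:, 2:].copy().view(np.int8).astype(np.float32)      # (N, 32)
    return (d * q).ravel()

# Round-trip one synthetic block.
raw = np.float16(0.05).tobytes() + np.arange(-16, 16, dtype=np.int8).tobytes()
print(dequant_q8_0(raw)[:4])   # ~[-0.8, -0.75, -0.7, -0.65]
```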
