14B model detected as 7B
I've been working on merging a 14 billion parameter model recently, but when it comes time to evaluate it, the system reports that the model has only 7 billion parameters instead of the expected 14 billion. It's funny that the top 7B model is actually a 14B one.
When you filter by the 7-8B size range on the Space, more than ten of the listed models are actually 14B.
There are quite a few models on the leaderboard where the indicated size is half the actual size:
- maldv/Qwentile2.5-32B-Instruct
- CultriX/Qwen2.5-14B-Wernickev3
...and many others, most of them Qwen-derived.
Hi! Thanks for the report!
We extract the number of parameters from the safetensors files automatically, in theory - @alozowski will be able to investigate why there is a mismatch when she comes back from vacation.
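For context, the extraction basically amounts to reading the JSON header of each safetensors file and summing the tensor shapes. A minimal sketch of that idea (a generic illustration of the format, not necessarily the exact code the backend runs; the file name in the usage comment is hypothetical):

```python
import json
import struct

def count_safetensors_params(path: str) -> int:
    """Sum the number of elements of every tensor declared in a .safetensors header."""
    with open(path, "rb") as f:
        # The file starts with an 8-byte little-endian length of the JSON header.
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))

    total = 0
    for name, info in header.items():
        if name == "__metadata__":  # optional free-form metadata block, not a tensor
            continue
        n = 1
        for dim in info["shape"]:
            n *= dim
        total += n
    return total

# Hypothetical usage: for a sharded 14B model this has to be summed over every shard.
# print(count_safetensors_params("model-00001-of-00006.safetensors"))
```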
For the difference between the comparator and the leaderboard, make sure you compare either raw or normalised scores on both (we have two ways to compute scores; it should be explained in the FAQ).
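As a rough sketch of why the two can look very different (the FAQ has the exact definition, so treat the formula below as an assumption): normalised scores rescale the raw score so that the random-guessing baseline maps to 0 and a perfect score to 100.

```python
def normalise(raw_score: float, random_baseline: float) -> float:
    """Rescale a raw score so the random baseline maps to 0 and a perfect score to 100.

    Assumed formula; see the FAQ for the exact definition used by the leaderboard.
    raw_score and random_baseline are fractions in [0, 1].
    """
    return max(0.0, (raw_score - random_baseline) / (1.0 - random_baseline)) * 100.0

# Example: 0.30 raw accuracy on a 4-way multiple-choice task (0.25 random baseline)
# normalises to about 6.7, which looks very different from the raw 30%.
print(normalise(0.30, 0.25))
```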
There are also some models by sometimesanotion; all of those models are deleted/unavailable.
Some of the request files:
- Qwen2.5-14B-Vimarckoso-v3-model_stock
- Lamarck-14B-v0.6-model_stock
- Qwen2.5-14B-Vimarckoso-v3-Prose01
- Qwentinuum-14B-v5
...and there are a dozen or so more similar ones.
Would it help to automatically flag a model for closer/manual inspection when its model name and auto-detected size differ significantly?
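For example, something along these lines (a rough sketch: the regex on the repo name and the 1.5x tolerance are arbitrary choices, and the detected parameter count in the example is made up):

```python
import re

def size_from_name(repo_id: str) -> float | None:
    """Pull a '7B' / '14B' / '1.5B'-style size hint out of a model repo name, if present."""
    match = re.search(r"(\d+(?:\.\d+)?)\s*[bB]\b", repo_id)
    return float(match.group(1)) * 1e9 if match else None

def should_flag(repo_id: str, detected_params: int, tolerance: float = 1.5) -> bool:
    """Flag for manual review when the name and the auto-detected size disagree by more than `tolerance`x."""
    claimed = size_from_name(repo_id)
    if claimed is None or detected_params <= 0:
        return False
    ratio = max(claimed, detected_params) / min(claimed, detected_params)
    return ratio > tolerance

# Example: a repo named "...-14B-..." detected as roughly 7.6B parameters
# (hypothetical count) would be flagged for manual inspection.
print(should_flag("CultriX/Qwen2.5-14B-Wernickev3", 7_615_616_512))  # True
```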