Quants thanks to @Lewdiculous: https://huggingface.co/Lewdiculous/Prima-LelantaclesV4-7b-16k-GGUF
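A minimal sketch of loading one of those GGUF quants with llama-cpp-python; the filename below is a hypothetical example (pick an actual quant file from the linked repo), and `n_ctx=16384` assumes the 16k context implied by the model name:

```python
from llama_cpp import Llama

# Filename is hypothetical; download an actual quant file from the repo above.
llm = Llama(
    model_path="Prima-LelantaclesV4-7b-16k.Q4_K_M.gguf",
    n_ctx=16384,  # assumed 16k context window, per the model name
)

# Simple completion call; llama-cpp-python returns an OpenAI-style dict.
out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```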
The following models were included in the merge:

- Test157t/Yarncules-7b-128k
- Test157t/Prima-LelantaclesV3-7b
Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Test157t/Yarncules-7b-128k
        layer_range: [0, 32]
      - model: Test157t/Prima-LelantaclesV3-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: Test157t/Prima-LelantaclesV3-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
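For anyone wanting to reproduce the merge, here is a minimal sketch using mergekit's Python API, assuming the YAML above is saved as `config.yml`; the output path and option values are illustrative:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the SLERP configuration shown above (assumed saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge; the output directory is illustrative.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # carry the base model's tokenizer into the output
        lazy_unpickle=True,   # lower peak memory while reading checkpoints
    ),
)
```

The same file can also be run from the command line with `mergekit-yaml config.yml ./merged-model`.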
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 68.28 |
| AI2 Reasoning Challenge (25-Shot) | 66.04 |
| HellaSwag (10-Shot) | 85.07 |
| MMLU (5-Shot) | 64.70 |
| TruthfulQA (0-shot) | 54.76 |
| Winogrande (5-shot) | 80.27 |
| GSM8k (5-shot) | 58.83 |