---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
- not-for-all-audiences
- nsfw
model-index:
- name: IceSakeV8RP-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 60.86
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/IceSakeV8RP-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 28.97
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/IceSakeV8RP-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.66
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/IceSakeV8RP-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 3.47
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/IceSakeV8RP-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.54
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/IceSakeV8RP-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 22.34
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/IceSakeV8RP-7b
      name: Open LLM Leaderboard
---
# IceSakeV8RP-7b
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
> This model is only intended as a component for further merges!
>
> The final model is [IceSakeRP-7b](https://huggingface.co/icefog72/IceSakeRP-7b).
### Merge Method
This model was merged using the SLERP merge method.
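For intuition, SLERP interpolates each pair of corresponding weight tensors along the arc between them rather than averaging them linearly, which tends to preserve the geometry of both parents. The sketch below illustrates the operation on a single tensor pair; it is not mergekit's actual implementation, and the `slerp` helper and its epsilon handling are assumptions made for the example.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns `a`, t=1 returns `b`; intermediate values follow the arc
    between the (normalized) tensors instead of a straight line.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:  # nearly parallel tensors: fall back to plain lerp
        mixed = (1.0 - t) * a_flat + t * b_flat
    else:
        mixed = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat \
              + (torch.sin(t * omega) / sin_omega) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```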
### Models Merged
The following models were included in the merge:
* IceLemonTea-IceCoffeRP-7b
* IceSakeV7RP-7b
* IceLatteRP-7b
* IceSakeV6RP-7b
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: IceLemonTea-IceCoffeRP-7b
        layer_range: [0, 32]
      - model: IceSakeV7RP-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: IceLemonTea-IceCoffeRP-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
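The `t` entries above are layer gradients: mergekit spreads each short list of anchor values across the 32 layers, so early self-attention layers stay close to the base model while later ones lean toward IceSakeV7RP-7b, with the MLP schedule reversed and 0.5 used for all remaining tensors. The sketch below shows how such a gradient could expand per layer; `expand_gradient` is a hypothetical helper for illustration, not mergekit code.

```python
import numpy as np

def expand_gradient(anchors, num_layers: int = 32) -> np.ndarray:
    """Linearly interpolate a short list of anchor values across layer indices."""
    xs = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), xs, anchors)

# Per-layer interpolation weight toward the non-base model
self_attn_t = expand_gradient([0, 0.5, 0.3, 0.7, 1])
mlp_t = expand_gradient([1, 0.5, 0.7, 0.3, 0])
print(self_attn_t.round(2))
print(mlp_t.round(2))
```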
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_icefog72__IceSakeV8RP-7b).
| Metric |Value|
|-------------------|----:|
|Avg. |21.64|
|IFEval (0-Shot) |60.86|
|BBH (3-Shot) |28.97|
|MATH Lvl 5 (4-Shot)| 5.66|
|GPQA (0-shot) | 3.47|
|MuSR (0-shot) | 8.54|
|MMLU-PRO (5-shot) |22.34|
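Although this checkpoint is primarily a merge component, it can presumably be loaded like any other Mistral-7B-class model with `transformers`. The snippet below is illustrative only; the Alpaca-style prompt format is assumed from the model tags, and the generation settings are arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "icefog72/IceSakeV8RP-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Alpaca-style prompt (assumed from the `alpaca` tag)
prompt = "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```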