---
language:
- en
license: apache-2.0
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Capybara
model-index:
- name: dolphin-2.6-mistral-7b-dpo-5.93B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 38.99
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/dolphin-2.6-mistral-7b-dpo-5.93B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 61.01
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/dolphin-2.6-mistral-7b-dpo-5.93B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 27.32
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/dolphin-2.6-mistral-7b-dpo-5.93B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 53.51
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/dolphin-2.6-mistral-7b-dpo-5.93B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.67
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/dolphin-2.6-mistral-7b-dpo-5.93B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.23
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/dolphin-2.6-mistral-7b-dpo-5.93B
      name: Open LLM Leaderboard
---
This is a pruned version of cognitivecomputations/dolphin-2.6-mistral-7b-dpo, reduced from 7.24B params to 5.93B params (~82%).

Steps to replicate:

1. Use `laserQlora.ipynb` from cognitivecomputations/laserRMT to determine which layers should be eliminated. Replace `model_name = "mistralai/Mistral-7B-v0.1"` with `model_name = "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"`. I also ran the script only for `self_attn.v_proj` (i.e., change the script to `layer_types=["self_attn.v_proj"]`).
2. Order by SNR descending and eliminate the top layers using mergekit. The elimination threshold is up to you, depending on how many layers you want removed. I decided to remove 6 layers (indexes: 3, 5, 16, 18, 19, 24).
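The layer-selection step above boils down to ranking layers by SNR and taking the top k. Here is a minimal sketch; the SNR values are placeholders for illustration (the real scores come from the laserRMT notebook), chosen so that the indices removed here match the six listed above:

```python
def layers_to_remove(snr_by_layer: dict[int, float], k: int) -> list[int]:
    """Return the indices of the k layers with the highest SNR, sorted ascending."""
    ranked = sorted(snr_by_layer, key=snr_by_layer.get, reverse=True)
    return sorted(ranked[:k])

# Placeholder SNR values, NOT real laserRMT output: pretend the six
# layers removed in this model scored highest.
snr = {i: 1.0 for i in range(32)}
for idx in (3, 5, 16, 18, 19, 24):
    snr[idx] = 10.0

print(layers_to_remove(snr, k=6))  # [3, 5, 16, 18, 19, 24]
```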
Here is the mergekit config:

```yaml
slices:
  - sources:
      - model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
        layer_range: [0, 3]
  - sources:
      - model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
        layer_range: [4, 5]
  - sources:
      - model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
        layer_range: [6, 16]
  - sources:
      - model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
        layer_range: [17, 18]
  - sources:
      - model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
        layer_range: [20, 24]
  - sources:
      - model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
        layer_range: [25, 32]
merge_method: passthrough
dtype: bfloat16
```
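As a sanity check (not part of mergekit itself), we can expand the slice ranges from the config and confirm which of the original 32 layers survive, assuming mergekit's `layer_range` is half-open (`[start, stop)`):

```python
# Slice ranges copied from the mergekit config above.
ranges = [(0, 3), (4, 5), (6, 16), (17, 18), (20, 24), (25, 32)]

# Expand each half-open [start, stop) range into concrete layer indices.
kept = [layer for start, stop in ranges for layer in range(start, stop)]
removed = [i for i in range(32) if i not in kept]

print(len(kept))  # 26 layers remain out of 32
print(removed)    # [3, 5, 16, 18, 19, 24]
```

This matches the six layers named earlier, so the config and the stated removal list agree. If you have mergekit installed, a config like this is typically applied with `mergekit-yaml config.yml ./output-dir` (path names here are placeholders).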
The model produced by mergekit with this configuration is this model (dolphin-2.6-mistral-7b-dpo-5.93B).
# Open LLM Leaderboard Evaluation Results

Detailed results can be found here
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 40.62 |
| AI2 Reasoning Challenge (25-Shot) | 38.99 |
| HellaSwag (10-Shot)               | 61.01 |
| MMLU (5-Shot)                     | 27.32 |
| TruthfulQA (0-shot)               | 53.51 |
| Winogrande (5-shot)               | 62.67 |
| GSM8k (5-shot)                    |  0.23 |