---
language: en
license: llama2
library_name: transformers
tags:
- mergekit
- merge
- llama-2
datasets:
- mlabonne/CodeLlama-2-20k
inference: false
model_type: llama
pipeline_tag: text-generation
base_model:
- davzoku/cria-llama2-7b-v1.3
model-index:
- name: frankencria-llama2-11b-v1.3-m.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 52.82
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davzoku/frankencria-llama2-11b-v1.3-m.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 77.5
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davzoku/frankencria-llama2-11b-v1.3-m.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 48
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davzoku/frankencria-llama2-11b-v1.3-m.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 46.87
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davzoku/frankencria-llama2-11b-v1.3-m.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 71.59
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davzoku/frankencria-llama2-11b-v1.3-m.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 15.01
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davzoku/frankencria-llama2-11b-v1.3-m.1
      name: Open LLM Leaderboard
---
# FrankenCRIA v1.3-m.1

## What is FrankenCRIA?

This is a frankenmerge of [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3).
The configuration is the same as [Undi95/Mistral-11B-v0.1](https://huggingface.co/Undi95/Mistral-11B-v0.1) and [mlabonne/FrankenBeagle14-11B](https://huggingface.co/mlabonne/FrankenBeagle14-11B), and follows the depth up-scaling (DUS) technique used in [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0).

Please be aware that this model is highly experimental, and no further training has been conducted following the merge. Therefore, the model's performance may not meet expectations, as described in the SOLAR paper.
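For intuition on where the extra parameters come from: the passthrough merge simply stacks two overlapping slices of the same base model, in the spirit of DUS, so the merged network has more decoder layers than the original. Below is a minimal sketch of that arithmetic, assuming the standard 32-layer LLaMA-2 7B depth; the slice ranges are the ones from the mergekit configuration further down this card.

```python
# Rough sketch (not part of the merge itself) of how the two passthrough
# slices stack up. Assumes the standard 32-layer LLaMA-2 7B base; the slice
# ranges are copied from the mergekit configuration shown below.
BASE_LAYERS = 32
slices = [(0, 24), (8, 32)]  # half-open [start, end) layer ranges

merged = [layer for start, end in slices for layer in range(start, end)]
duplicated = sorted({layer for layer in merged if merged.count(layer) > 1})

print(f"base layers:            {BASE_LAYERS}")                       # 32
print(f"layers after the merge: {len(merged)}")                       # 48
print(f"duplicated base layers: {duplicated[0]}..{duplicated[-1]}")   # 8..23
```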
## 📦 FrankenCRIA Model Release

FrankenCRIA v1.3 comes with several variants.

- [davzoku/frankencria-llama2-11b-v1.3-m.1](https://huggingface.co/davzoku/frankencria-llama2-11b-v1.3-m.1): 11B FrankenMerge inspired by [Undi95/Mistral-11B-v0.1](https://huggingface.co/Undi95/Mistral-11B-v0.1)
- [davzoku/frankencria-llama2-11b-v1.3-m.2](https://huggingface.co/davzoku/frankencria-llama2-11b-v1.3-m.2): 12.5B interleaving FrankenMerge inspired by [vilm/vinallama-12.5b-chat-DUS](https://huggingface.co/vilm/vinallama-12.5b-chat-DUS)
## 🧩 Merge Details

### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:

- [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
# https://huggingface.co/Undi95/Mistral-11B-v0.1
slices:
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [0, 24]
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
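Because the merge only rearranges layers and keeps the standard LLaMA architecture (`model_type: llama`), the result can be loaded like any other causal language model with 🤗 Transformers. A minimal usage sketch, assuming a recent `transformers`/`accelerate` install and enough GPU memory for ~11B parameters in bfloat16; the prompt and sampling settings are illustrative only, not a recommended configuration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davzoku/frankencria-llama2-11b-v1.3-m.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",
)

prompt = "What is a llama?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```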
## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davzoku/frankencria-llama2-11b-v1.3-m.1).

| Metric                            | Value |
| --------------------------------- | ----: |
| Avg.                              | 51.96 |
| AI2 Reasoning Challenge (25-Shot) | 52.82 |
| HellaSwag (10-Shot)               | 77.50 |
| MMLU (5-Shot)                     | 48.00 |
| TruthfulQA (0-shot)               | 46.87 |
| Winogrande (5-shot)               | 71.59 |
| GSM8k (5-shot)                    | 15.01 |