---
library_name: transformers
tags:
- merge
- llama-3.1
- roleplay
- function calling
base_model:
- unsloth/Meta-Llama-3.1-8B-Instruct
- REILX/Llama-3-8B-Instruct-750Mb-lora
datasets:
- databricks/databricks-dolly-15k
- microsoft/orca-math-word-problems-200k
- LooksJuicy/ruozhiba
base_model_relation: merge
model-index:
- name: KRONOS-8B-V1-P3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 71.37
name: averaged accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V1-P3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 30.27
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V1-P3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 18.35
name: exact match
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V1-P3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.34
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V1-P3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.96
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V1-P3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.72
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V1-P3
name: Open LLM Leaderboard
---

# KRONOS V1 P3
This is a merge of Meta-Llama-3.1-8B-Instruct and REILX's "750Mb" LoRA (REILX/Llama-3-8B-Instruct-750Mb-lora), created using llm-tools.
The primary purpose of this model is to be merged into other models in the same family using the TIES merge method.
Creating quants for this is entirely unnecessary.
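
As a rough illustration of that intended use, a downstream TIES merge could look like the mergekit-style configuration below. This is a sketch, not a config shipped with this repo: the second model entry, densities, and weights are placeholders chosen by whoever performs the merge.

```yaml
# Illustrative mergekit TIES config (not from this repo).
# The sibling model, densities and weights are placeholders.
merge_method: ties
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
models:
  - model: T145/KRONOS-8B-V1-P3                      # this model
    parameters:
      density: 0.5
      weight: 0.5
  - model: some-org/another-llama-3.1-8b-finetune    # hypothetical sibling in the same family
    parameters:
      density: 0.5
      weight: 0.5
parameters:
  normalize: true
dtype: bfloat16
```

With mergekit, a config like this is typically applied via `mergekit-yaml config.yaml ./merged-model`.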
## Merge Details

### Configuration
The following Bash command was used to produce this model:

```bash
python /llm-tools/merge-lora.py -m unsloth/Meta-Llama-3.1-8B-Instruct -l REILX/Llama-3-8B-Instruct-750Mb-lora
```
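
For readers without llm-tools, roughly the same result can be reproduced with the PEFT API. The sketch below is an assumed equivalent of the command above, not the author's script; the output directory name is arbitrary.

```python
# Hypothetical equivalent of the merge-lora.py step above, using PEFT.
# Repo IDs come from this card; everything else is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Meta-Llama-3.1-8B-Instruct"
lora_id = "REILX/Llama-3-8B-Instruct-750Mb-lora"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter, then fold its weights into the base model.
merged = PeftModel.from_pretrained(base, lora_id).merge_and_unload()

merged.save_pretrained("KRONOS-8B-V1-P3")
tokenizer.save_pretrained("KRONOS-8B-V1-P3")
```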
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!
| Metric              | Value (%) |
|---------------------|----------:|
| Average             |     25.67 |
| IFEval (0-Shot)     |     71.37 |
| BBH (3-Shot)        |     30.27 |
| MATH Lvl 5 (4-Shot) |     18.35 |
| GPQA (0-shot)       |      1.34 |
| MuSR (0-shot)       |      5.96 |
| MMLU-PRO (5-shot)   |     26.72 |