Llama-3.1-MedIT-SUN-8B

Model Description

Llama-3.1-MedIT-SUN-8B is an experimental language model that leverages model merging techniques to combine the capabilities of multiple foundation models. This 8B parameter model is built upon the Llama-3.1-8B-Instruct architecture and represents an exploration in model fusion methodologies.

Key Features

  • Base Architecture: Meta's Llama-3.1-8B-Instruct
  • Parameter Count: 8 billion
  • Development: Created by MedIT Solutions
  • Merged Components:
    • arcee-ai/Llama-3.1-SuperNova-Lite
    • meta-llama/Llama-3.1-8B-Instruct

Technical Details

The model was created with the proprietary MedIT-mesh technique for model merging and serves as a proof of concept and testing ground for model fusion methodologies.
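MedIT-mesh itself is proprietary and undocumented, but the general idea behind weight-space merging can be illustrated with a simple linear interpolation of two checkpoints' parameters. The sketch below is a hypothetical illustration, not the actual MedIT-mesh algorithm; `merge_state_dicts` and the plain-list "tensors" are stand-ins for real framework state dicts:

```python
# Hypothetical sketch of linear weight-space merging (NOT the actual
# MedIT-mesh algorithm, which is proprietary). Tensors are modeled as
# plain Python lists of floats for illustration.

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Return alpha * sd_a + (1 - alpha) * sd_b, parameter by parameter."""
    if sd_a.keys() != sd_b.keys():
        raise ValueError("checkpoints must share the same parameter names")
    return {
        name: [alpha * a + (1 - alpha) * b
               for a, b in zip(sd_a[name], sd_b[name])]
        for name in sd_a
    }

# Two toy "checkpoints" with identical architecture (same keys and shapes),
# standing in for the two merged components.
supernova = {"layer0.weight": [1.0, 2.0], "layer0.bias": [0.0, 0.0]}
instruct  = {"layer0.weight": [3.0, 4.0], "layer0.bias": [2.0, 2.0]}

merged = merge_state_dicts(supernova, instruct, alpha=0.5)
print(merged["layer0.weight"])  # [2.0, 3.0]
```

Real merging toolchains operate on actual tensors and often use more elaborate schemes (per-layer weights, SLERP, TIES-style sign resolution), but the key requirement is the same: the source models must share an architecture, as both components here do.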

Purpose

This model was developed primarily for testing and research purposes, exploring the potential of model merging techniques in language model development. It should be considered an experimental release rather than a production-ready model.

Usage Notes

As this is a test model, it is recommended for research and experimental use only; it has not been validated for production applications.
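Since the model is built on Llama-3.1-8B-Instruct, it inherits the Llama 3.1 chat format. As a minimal sketch, a prompt can be assembled by hand using the standard Llama 3.1 special tokens; in practice, `tokenizer.apply_chat_template` from `transformers` is the authoritative way to do this, and the helper below is only illustrative:

```python
# Build a Llama 3.1-style chat prompt by hand. Assumes the standard
# Llama 3.1 special tokens; in real use, prefer the tokenizer's
# apply_chat_template, which is authoritative for this model.

def build_llama31_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Cue the model to generate the assistant turn next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama31_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize model merging in one sentence."},
])
```

The resulting string can be tokenized and passed to any runtime that serves the model's weights.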

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric              | Value |
|---------------------|-------|
| Avg.                | 30.04 |
| IFEval (0-shot)     | 78.37 |
| BBH (3-shot)        | 32.00 |
| MATH Lvl 5 (4-shot) | 20.02 |
| GPQA (0-shot)       |  7.83 |
| MuSR (0-shot)       |  9.64 |
| MMLU-PRO (5-shot)   | 32.40 |
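The reported average is the unweighted mean of the six benchmark scores, which is easy to verify:

```python
# Verify that the reported Avg. is the unweighted mean of the six
# benchmark scores listed in the leaderboard table.
scores = {
    "IFEval (0-shot)": 78.37,
    "BBH (3-shot)": 32.00,
    "MATH Lvl 5 (4-shot)": 20.02,
    "GPQA (0-shot)": 7.83,
    "MuSR (0-shot)": 9.64,
    "MMLU-PRO (5-shot)": 32.40,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 30.04
```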
Model Details

  • Format: Safetensors
  • Model size: 8.03B params
  • Tensor type: BF16
