MedLlama-3-8B_DARE

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged with the DARE TIES merge method, using mlabonne/ChimeraLlama-3-8B-v3 as the base.
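Conceptually, DARE drops a random fraction of each fine-tuned model's task vector and rescales the survivors, while TIES resolves sign conflicts between donors. Below is a rough sketch of the drop-and-rescale step only; it is illustrative, not mergekit's actual implementation, and `density`/`weight` mirror the values in the configuration further down:

```python
# Illustrative sketch of DARE's drop-and-rescale step. mergekit's
# dare_ties method additionally applies TIES sign election before
# summing the task vectors.
import torch

def dare_delta(finetuned: torch.Tensor, base: torch.Tensor,
               density: float = 0.53) -> torch.Tensor:
    # Task vector: what the fine-tune changed relative to the base.
    delta = finetuned - base
    # Randomly keep ~density of the entries and zero the rest.
    mask = (torch.rand_like(delta) < density).to(delta.dtype)
    # Rescale survivors by 1/density so the expected delta is unchanged.
    return delta * mask / density

# Weighted combination, mirroring `weight: 0.5` for each donor model:
# merged = base + 0.5 * dare_delta(donor_a, base) + 0.5 * dare_delta(donor_b, base)
```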

Models Merged

The following models were included in the merge:

- sethuiyer/Medichat-Llama3-8B
- johnsnowlabs/JSL-MedLlama-3-8B-v2.0
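A minimal sketch of loading and querying the merged model with transformers (the prompt and generation settings are illustrative, and the chat template is assumed to be the Llama-3 format shipped with the tokenizer):

```python
# Minimal usage sketch; generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChenWeiLi/MedLlama-3-8B_DARE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "user", "content": "What are the classic symptoms of appendicitis?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```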

Evaluation

  • multimedqa (0-shot)
| Tasks                        | Version | Filter | n-shot | Metric   | Value  | Stderr   |
|------------------------------|---------|--------|--------|----------|--------|----------|
| medmcqa                      | Yaml    | none   | 0      | acc      | 0.5728 | ± 0.0076 |
|                              |         | none   | 0      | acc_norm | 0.5728 | ± 0.0076 |
| medqa_4options               | Yaml    | none   | 0      | acc      | 0.5923 | ± 0.0138 |
|                              |         | none   | 0      | acc_norm | 0.5923 | ± 0.0138 |
| anatomy (mmlu)               | 0       | none   | 0      | acc      | 0.7111 | ± 0.0392 |
| clinical_knowledge (mmlu)    | 0       | none   | 0      | acc      | 0.7547 | ± 0.0265 |
| college_biology (mmlu)       | 0       | none   | 0      | acc      | 0.7917 | ± 0.0340 |
| college_medicine (mmlu)      | 0       | none   | 0      | acc      | 0.6647 | ± 0.0360 |
| medical_genetics (mmlu)      | 0       | none   | 0      | acc      | 0.8200 | ± 0.0386 |
| professional_medicine (mmlu) | 0       | none   | 0      | acc      | 0.7426 | ± 0.0266 |
| pubmedqa                     | 1       | none   | 0      | acc      | 0.7400 | ± 0.0196 |

| Groups | Version | Filter | n-shot | Metric   | Value  | Stderr   |
|--------|---------|--------|--------|----------|--------|----------|
| stem   | N/A     | none   | 0      | acc_norm | 0.5773 | ± 0.0067 |
|        |         | none   | 0      | acc      | 0.6145 | ± 0.0057 |
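The task and group names above match EleutherAI's lm-evaluation-harness; a sketch of a comparable 0-shot run via the harness's Python API follows (the harness version originally used is not stated, so treat the exact call as an assumption):

```python
# Sketch: re-running the 0-shot multimedqa evaluation with
# lm-evaluation-harness (pip install lm-eval). The exact harness
# version used for the table above is unknown.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ChenWeiLi/MedLlama-3-8B_DARE,dtype=float16",
    tasks=["multimedqa"],
    num_fewshot=0,
)
print(results["results"])
```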

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mlabonne/ChimeraLlama-3-8B-v3
    # No parameters necessary for base model

  - model: sethuiyer/Medichat-Llama3-8B
    parameters:
      density: 0.53
      weight: 0.5
  - model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0
    parameters:
      density: 0.53
      weight: 0.5

merge_method: dare_ties
base_model: mlabonne/ChimeraLlama-3-8B-v3
parameters:
  int8_mask: true
dtype: float16
```
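To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` entry point; a minimal sketch, with illustrative config and output paths:

```python
# Sketch: invoke mergekit's command-line entry point (pip install mergekit).
# "dare_config.yaml" and "./MedLlama-3-8B_DARE" are illustrative paths.
import subprocess

subprocess.run(
    ["mergekit-yaml", "dare_config.yaml", "./MedLlama-3-8B_DARE", "--cuda"],
    check=True,
)
```

The `--cuda` flag runs the merge on GPU; drop it for a CPU-only merge.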
