
Model Card: MELT-llama-2-3x70b-chat-hf

Medical Education Language Transformer (MELT)

Model Type:

The MELT-llama-2-3x70b-chat-hf Large Language Model (LLM) is a generative text model pre-trained and fine-tuned on publicly available medical data.

MELT-llama-2-3x70b-chat-hf demonstrated an average 26.3% improvement over llama-2-3x70b-chat-hf (an MoE of 3 x llama-2-70b-chat-hf) across 3 USMLE, Indian AIIMS, and NEET medical examination benchmarks.

This is a Mixture-of-Experts (MoE) model; thanks to Charles Goddard for the code/tools.

Model Details

The Medical Education Language Transformer (MELT) models have been trained on a wide range of text, chat, Q/A, and instruction data in the medical domain.

While the model was evaluated using publicly available USMLE, Indian AIIMS, and NEET medical examination example questions, its use is intended to be more broadly applicable.


Uses

MELT is intended for research purposes only. MELT models are best suited for prompts using a QA or chat format.

Out-of-Scope Use

MELT is intended for research purposes only and should not be used for medical advice.

Bias, Risks, and Limitations

MELT was trained using publicly available data collections, which likely contain biased and inaccurate information. The training and evaluation datasets have not been evaluated for content or accuracy.

How to Get Started with the Model

Use this model like you would the Mixtral-8x7B-Instruct-v0.1 model.
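Since the card points to the Mixtral-8x7B-Instruct-v0.1 usage pattern, a minimal sketch follows, assuming the standard Llama-2-style chat prompt format (`[INST]`/`<<SYS>>` markers) and the standard transformers loading API. The model id and prompts below are placeholders for illustration.

```python
# Minimal inference sketch. Assumption: this model follows the Llama-2 chat
# prompt format, as Mixtral-8x7B-Instruct-v0.1-style chat models do.

def build_chat_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama-2-style chat prompt."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_chat_prompt(
    "You are a medical education assistant. For research use only.",
    "Summarize the indications for metformin.",
)

# Loading and generation (not run here; a 182B-parameter model needs
# multiple large GPUs). Standard transformers pattern:
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("MELT-llama-2-3x70b-chat-hf")
# model = AutoModelForCausalLM.from_pretrained(
#     "MELT-llama-2-3x70b-chat-hf", torch_dtype="auto", device_map="auto")
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# outputs = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```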

Training Details

Training Data

The following datasets were used for training:

  • Expert Med
  • MedQA train
  • MedMCQA train
  • LiveQA
  • MedicationQA
  • MMLU clinical topics
  • Medical Flashcards
  • Wikidoc
  • Wikidoc Patient Information
  • MEDIQA
  • MMMLU
  • icliniq 10k
  • HealthCare Magic 100k
  • GenMedGPT-5k
  • Mental Health Conversational

Training Procedure

Training Hyperparameters

  • LoRA Rank: 64
  • LoRA Alpha: 16
  • LoRA Targets: "o_proj","down_proj","v_proj","gate_proj","up_proj","k_proj","q_proj"
  • LR: 2e-4
  • Epochs: 3
  • Precision: bf16
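The hyperparameters above can be expressed as a PEFT LoRA configuration; a sketch, assuming the Hugging Face `peft` library (not named in the card):

```python
# Sketch: the LoRA hyperparameters above as a peft LoraConfig.
# Assumption: training used the Hugging Face peft library.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,              # LoRA rank
    lora_alpha=16,     # LoRA scaling alpha
    target_modules=["o_proj", "down_proj", "v_proj",
                    "gate_proj", "up_proj", "k_proj", "q_proj"],
    task_type="CAUSAL_LM",
)
```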

Evaluation

MELT-llama-2-3x70b-chat-hf demonstrated an average 26.3% improvement over llama-2-3x70b-chat-hf (an MoE of 3 x llama-2-70b-chat-hf) across 3 USMLE, Indian AIIMS, and NEET medical examination benchmarks.

llama-2-3x70b-chat-hf

  • medqa: {'base': {'Average': 49.17, 'STEP-1': 47.35, 'STEP-2&3': 51.26}}
  • mausmle: {'base': {'Average': 54.48, 'STEP-1': 52.94, 'STEP-2': 58.62, 'STEP-3': 52.34}}
  • medmcqa: {'base': {'Average': 47.48, 'MEDICINE': 48.91, 'OPHTHALMOLOGY': 40.48, 'ANATOMY': 46.58, 'PATHOLOGY': 50.78, 'PHYSIOLOGY': 56.06, 'DENTAL': 40.76, 'RADIOLOGY': 50.0, 'BIOCHEMISTRY': 57.85, 'ANAESTHESIA': 30.43, 'GYNAECOLOGY': 45.1, 'PHARMACOLOGY': 57.3, 'SOCIAL': 44.44, 'PEDIATRICS': 52.27, 'ENT': 50.0, 'SURGERY': 50.4, 'MICROBIOLOGY': 42.47, 'FORENSIC': 55.81, 'PSYCHIATRY': 55.56, 'SKIN': 50.0, 'ORTHOPAEDICS': 57.14, 'UNKNOWN': 100.0}}
  • average: 50.37%

MELT-llama-2-3x70b-chat-hf

  • medqa: {'base': {'Average': 62.31, 'STEP-1': 60.91, 'STEP-2&3': 63.91}}
  • mausmle: {'base': {'Average': 70.61, 'STEP-1': 70.59, 'STEP-2': 66.67, 'STEP-3': 73.83}}
  • medmcqa: {'base': {'Average': 57.89, 'MEDICINE': 54.89, 'OPHTHALMOLOGY': 45.24, 'ANATOMY': 59.59, 'PATHOLOGY': 67.83, 'PHYSIOLOGY': 62.88, 'DENTAL': 49.76, 'RADIOLOGY': 71.43, 'BIOCHEMISTRY': 68.6, 'ANAESTHESIA': 65.22, 'GYNAECOLOGY': 54.25, 'PHARMACOLOGY': 66.85, 'SOCIAL': 52.22, 'PEDIATRICS': 63.64, 'ENT': 65.79, 'SURGERY': 59.68, 'MICROBIOLOGY': 50.68, 'FORENSIC': 67.44, 'PSYCHIATRY': 88.89, 'SKIN': 50.0, 'ORTHOPAEDICS': 64.29, 'UNKNOWN': 100.0}}
  • average: 63.6%
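The headline 26.3% figure appears to be the relative improvement between the two overall averages; a quick check, assuming that is how it was computed:

```python
# Reproducing the headline figure from the averages reported above.
# Assumption: 26.3% is the relative improvement of the overall averages.
base_avg = 50.37   # llama-2-3x70b-chat-hf average across the three benchmarks
melt_avg = 63.6    # MELT-llama-2-3x70b-chat-hf average

relative_improvement = (melt_avg - base_avg) / base_avg * 100
print(f"{relative_improvement:.1f}%")  # -> 26.3%
```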

Testing Data, Factors & Metrics

Testing Data

  • MedQA test
  • MedMCQA test
  • MA USMLE

Disclaimer:

The use of large language models, such as this one, is provided without warranties or guarantees of any kind. While every effort has been made to ensure accuracy, completeness, and reliability of the information generated, it should be noted that these models may produce responses that are inaccurate, outdated, or inappropriate for specific purposes. Users are advised to exercise discretion and judgment when relying on the information generated by these models. The outputs should not be considered as professional, legal, medical, financial, or any other form of advice. It is recommended to seek expert advice or consult appropriate sources for specific queries or critical decision-making. The creators, developers, and providers of these models disclaim any liability for damages, losses, or any consequences arising from the use, reliance upon, or interpretation of the information provided by these models. The user assumes full responsibility for their interactions and usage of the generated content. By using these language models, users agree to indemnify and hold harmless the developers, providers, and affiliates from any claims, damages, or liabilities that may arise from their use. Please be aware that these models are constantly evolving, and their capabilities, limitations, and outputs may change over time without prior notice. Your use of this language model signifies your acceptance and understanding of this disclaimer.
