---
license: other
language:
  - en
pipeline_tag: text-generation
inference: false
tags:
  - pytorch
  - llama
  - llama-2
  - qCammel-70
library_name: transformers
---

# qCammel-70

qCammel-70 is a fine-tuned version of the Llama-2 70B model, trained on a distilled dataset of 15,000 instructions using QLoRA. The model is optimized for academic medical knowledge and instruction-following capabilities.
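
Since the model card targets the `transformers` library and the `text-generation` pipeline, a minimal inference sketch is shown below. The repo id and generation settings are illustrative assumptions, not taken from the original card; substitute the actual Hub repository name for this model.

```python
# Minimal inference sketch (illustrative, not from the original card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qCammel-70-x"  # hypothetical: replace with the full Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 70B weights are large; half precision reduces memory
    device_map="auto",          # shard the model across available GPUs
)

prompt = "Summarize the first-line treatment options for type 2 diabetes."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```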

## Model Details

**Note:** Use of this model is governed by the Meta license. To download the model weights and tokenizer, please visit the website and accept the license before downloading this model.

The fine-tuning process applied to qCammel-70 uses a distilled dataset of 15,000 instructions and is trained with QLoRA, a parameter-efficient method that fine-tunes low-rank adapters on top of a 4-bit quantized base model.
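
For illustration, a rough sketch of a QLoRA setup using `transformers`, `bitsandbytes`, and `peft` is shown below. The hyperparameters (rank, alpha, target modules) are assumptions for demonstration only and are not the values used to train qCammel-70.

```python
# Illustrative QLoRA configuration sketch (assumed hyperparameters, not the original recipe).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "meta-llama/Llama-2-70b-hf"  # gated base model; requires accepting Meta's license

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize frozen base weights to 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64,                                   # illustrative LoRA rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the low-rank adapters are trainable
model.print_trainable_parameters()
# The adapted model would then be trained on the instruction dataset with a standard Trainer loop.
```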

**Variations** The original Llama 2 comes in 7B, 13B, and 70B parameter sizes; qCammel-70 is the fine-tuned version of the 70B model.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** qCammel-70 is based on Llama 2, an auto-regressive language model built on a decoder-only transformer architecture.

**License** A custom commercial license is available at https://ai.meta.com/resources/models-and-libraries/llama-downloads/. Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.

## Research Papers

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 58.32 |
| ARC (25-shot) | 68.34 |
| HellaSwag (10-shot) | 87.87 |
| MMLU (5-shot) | 70.18 |
| TruthfulQA (0-shot) | 57.47 |
| Winogrande (5-shot) | 84.29 |
| GSM8K (5-shot) | 29.72 |
| DROP (3-shot) | 10.34 |