
Quantization made by Richard Erkhov.

  • GitHub
  • Discord
  • Request more models

gemma-7b-zephyr-sft - bnb 4bits
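As a rough usage sketch (not an official snippet; it assumes transformers, accelerate and bitsandbytes are installed and a CUDA GPU is available), the "bnb 4bits" setup corresponds to loading the checkpoint with a bitsandbytes 4-bit quantization config:

```python
# Rough sketch, not an official snippet: reproduce the "bnb 4bits" setup by
# loading the original checkpoint with on-the-fly bitsandbytes 4-bit quantization.
# Assumes transformers, accelerate and bitsandbytes are installed and a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "wandb/gemma-7b-zephyr-sft"  # original model; this repo ships the same weights pre-quantized

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,  # a common choice, not mandated by this card
    ),
)
```

Loading this repository's pre-quantized weights directly with from_pretrained should behave the same, since the saved checkpoint carries its quantization config.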

Original model description:

license: other
library_name: transformers
datasets:
- HuggingFaceH4/ultrachat_200k
base_model: google/gemma-7b
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
model-index:
- name: gemma-7b-zephyr-sft
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 61.43
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 80.73
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.33
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 43.35
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 74.19
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 49.81
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-7b-zephyr-sft
      name: Open LLM Leaderboard


Gemma 7B Zephyr SFT

The Zephyr SFT recipe applied on top of Gemma 7B

Model description

  • Model type: An 8.5B-parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets (a minimal chat-format usage sketch follows this list).
  • Language(s) (NLP): Primarily English
  • Finetuned from model: google/gemma-7b
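As a minimal usage sketch (an assumption, not an official example: it presumes the tokenizer ships a Zephyr-style chat template, which is typical for models trained with this recipe, and the prompt is arbitrary), inference could look like:

```python
# Minimal generation sketch for the original SFT model; not an official example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wandb/gemma-7b-zephyr-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Explain supervised fine-tuning in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```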

Recipe

We trained using the Alignment Handbook recipe, logging to W&B.

Visit the W&B workspace here
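For orientation only, the SFT stage of such a recipe can be sketched with TRL, which the Alignment Handbook builds on. This is not the handbook's actual configuration; the hyperparameters below are placeholders and exact argument names vary across TRL versions:

```python
# Rough sketch of a Zephyr-style SFT run with TRL; placeholders throughout,
# not the configuration used to train this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

trainer = SFTTrainer(
    model="google/gemma-7b",            # base model named in the card
    train_dataset=train_dataset,        # recent TRL formats the "messages" column via the chat template
    args=SFTConfig(
        output_dir="gemma-7b-zephyr-sft",
        bf16=True,
        num_train_epochs=1,             # placeholder, not the recipe's value
        per_device_train_batch_size=2,  # placeholder
        gradient_accumulation_steps=8,  # placeholder
        report_to="wandb",              # the authors logged to Weights & Biases
    ),
)
trainer.train()
```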

License

This model is covered by the same license as the original Gemma model collection (the Gemma Terms of Use linked in the metadata above).

Compute was provided by Lambda Labs: a single 8x A100 (80 GB) node.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here
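For local spot checks, here is a hedged sketch using EleutherAI's lm-evaluation-harness; the task name and few-shot count are taken from the table below, but the leaderboard's exact harness version and settings may differ, so scores will not match exactly:

```python
# Hedged sketch: reproduce one leaderboard task locally with lm-evaluation-harness.
# The leaderboard's harness version/settings differ, so expect somewhat different numbers.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=wandb/gemma-7b-zephyr-sft,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```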

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 61.64 |
| AI2 Reasoning Challenge (25-Shot) | 61.43 |
| HellaSwag (10-Shot)               | 80.73 |
| MMLU (5-Shot)                     | 60.33 |
| TruthfulQA (0-shot)               | 43.35 |
| Winogrande (5-shot)               | 74.19 |
| GSM8k (5-shot)                    | 49.81 |
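The reported Avg. is simply the unweighted mean of the six benchmark scores:

```python
# The Avg. row above is the arithmetic mean of the six benchmark scores.
scores = [61.43, 80.73, 60.33, 43.35, 74.19, 49.81]
print(round(sum(scores) / len(scores), 2))  # 61.64
```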
Safetensors checkpoint: 4.78B stored parameters, tensor types F32, FP16, and U8 (the 4-bit weights are packed into U8 tensors, which is why the stored parameter count reads lower than the original 8.5B).