---
library_name: transformers
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
base_model:
  - google/gemma-7b
datasets:
  - vicgalle/alpaca-gpt4
---

# Gemmalpaca-7B

This is gemma-7b, supervised fine-tuned on the vicgalle/alpaca-gpt4 dataset. It outperforms gemma-7b-it, Google's chat version, on Nous' benchmark suite.

It's mostly a test to see how fine-tuning works with Gemma models on a well-known dataset.

## 🔍 Applications

This model has a context length of 8k. I recommend using it with the Alpaca chat template and NOT the Gemma Instruct template (it works perfectly with LM Studio). You also want to add `</s>` as a stop token.
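
As a rough usage sketch (not part of the original card), the snippet below loads the model with 🤗 Transformers, wraps an instruction in the standard Alpaca prompt format, and cuts the generation off at `</s>`; the prompt wording and generation settings are illustrative assumptions.

```python
# Minimal usage sketch (assumed, not from the card): Alpaca-style prompting
# with transformers, stopping the output at "</s>".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/Gemmalpaca-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Alpaca chat template (instruction-only variant, no input field).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what supervised fine-tuning is.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

# Emulate the extra "</s>" stop token by truncating at its first occurrence.
print(completion.split("</s>")[0].strip())
```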

## 🏆 Evaluation

### Nous

Gemmalpaca-7B outperforms gemma-7b and gemma-7b-it on Nous' benchmark suite (evaluation performed using LLM AutoEval). See the entire leaderboard here.

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| mlabonne/Gemmalpaca-7B 📄 | 34.45 | 21.6 | 40.87 | 44.85 | 30.49 |
| google/gemma-2b 📄 | 34.26 | 22.7 | 43.35 | 39.96 | 31.03 |
| google/gemma-7b 📄 | 33.56 | 20.64 | 38.49 | 46.61 | 28.51 |
| google/gemma-7b-it 📄 | 33.53 | 21.33 | 40.84 | 41.7 | 30.25 |

## 🧩 Configuration

It was trained using Axolotl with the following configuration.

```yaml
base_model: alpindale/gemma-7b
model_type: AutoModelForCausalLM
tokenizer_config: philschmid/gemma-tokenizer-chatml
tokenizer_type: AutoTokenizer
tokenizer_use_fast: true

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: vicgalle/alpaca-gpt4
    type: alpaca

dataset_prepared_path:
val_set_size: 0.01
output_dir: ./out

sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true

adapter: qlora
lora_model_dir:
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true

wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 2
micro_batch_size: 4
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 10
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
```
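
Because the run trains a QLoRA adapter (`adapter: qlora`) on top of the base weights and writes it to `./out`, the adapter has to be merged back into the base model to obtain a standalone checkpoint. The sketch below shows one way to do this with `peft`; it is an assumed, illustrative step, not the documented release procedure, and the output path is hypothetical.

```python
# Illustrative QLoRA adapter merge (assumed workflow, not documented in this card).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and tokenizer referenced in the Axolotl config above.
base = AutoModelForCausalLM.from_pretrained(
    "alpindale/gemma-7b", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("alpindale/gemma-7b")

# Load the LoRA adapter produced in ./out and fold its weights into the base model.
merged = PeftModel.from_pretrained(base, "./out").merge_and_unload()

# Save a full-weight checkpoint (hypothetical output directory).
merged.save_pretrained("gemmalpaca-7b-merged")
tokenizer.save_pretrained("gemmalpaca-7b-merged")
```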

Built with Axolotl