|
---
library_name: transformers
base_model:
- nbeerbower/llama-3-bophades-v3-8B
datasets:
- jondurbin/gutenberg-dpo-v0.1
license: other
license_name: llama3
---
|
|
|
# llama-3-gutenberg-8B |
|
|
|
This model is based on Llama-3-8B and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).
|
|
|
[nbeerbower/llama-3-bophades-v3-8B](https://huggingface.co/nbeerbower/llama-3-bophades-v3-8B) fine-tuned with DPO on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1).
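
For reference, a minimal inference sketch (assuming this card's repo id is `nbeerbower/llama-3-gutenberg-8B`; the ChatML prompt below mirrors the training format shown under Configuration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/llama-3-gutenberg-8B"  # assumed repo id for this model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a prompt in the same ChatML format used during DPO training
prompt = (
    "<|im_start|>user\n"
    "Write the opening paragraph of a gothic short story.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```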
|
|
|
### Method |
|
|
|
Fine-tuned using an A100 on Google Colab, following Maxime Labonne's guide:
|
|
|
[Fine-Tune Your Own Llama 2 Model in a Colab Notebook](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) |
|
|
|
### Configuration |
|
|
|
Dataset preparation and ChatML prompt formatting:
|
|
|
```python
from datasets import load_dataset
from transformers import AutoTokenizer

model_name = "nbeerbower/llama-3-bophades-v3-8B"  # base model being fine-tuned

def chatml_format(example):
    # Format instruction as a ChatML user turn and open the assistant turn
    prompt = "<|im_start|>user\n" + example['prompt'] + "<|im_end|>\n<|im_start|>assistant\n"

    # Format chosen answer
    chosen = example['chosen'] + "<|im_end|>\n"

    # Format rejected answer
    rejected = example['rejected'] + "<|im_end|>\n"

    return {
        "prompt": prompt,
        "chosen": chosen,
        "rejected": rejected,
    }

dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1")['train']

# Save original column names so they can be dropped after mapping
original_columns = dataset.column_names

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

# Format dataset into prompt/chosen/rejected columns for DPO
dataset = dataset.map(
    chatml_format,
    remove_columns=original_columns
)
```
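
To sanity-check the mapping, one formatted record can be printed before training (a quick inspection step, not part of the original run):

```python
# Verify the ChatML structure of a mapped record
sample = dataset[0]
print(sample["prompt"])         # "<|im_start|>user\n...<|im_end|>\n<|im_start|>assistant\n"
print(sample["chosen"][:200])   # preferred completion, terminated by "<|im_end|>\n"
print(sample["rejected"][:200]) # dispreferred completion, same terminator
```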
|
|
|
LoRA, model, and training settings: |
|
|
|
```python
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, TrainingArguments
from trl import DPOTrainer

new_model = "llama-3-gutenberg-8B"  # output directory / name of the fine-tuned model

# LoRA configuration
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)

# Model to fine-tune, loaded in 4-bit with bf16 compute
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True
)
model.config.use_cache = False

# Reference model for the DPO loss
ref_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True
)

# Training arguments
training_args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    gradient_checkpointing=True,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    max_steps=1000,
    save_strategy="no",
    logging_steps=1,
    output_dir=new_model,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    bf16=True,
    report_to="wandb",
)

# Create DPO trainer
dpo_trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
    force_use_ref_model=True
)
```
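
The card stops at trainer construction. A typical continuation, sketched here after the linked notebook's pattern (the `final_checkpoint` path and the merge step are assumptions, not shown in this card), trains, saves the LoRA adapter, and merges it into the base model:

```python
# Run DPO fine-tuning
dpo_trainer.train()

# Save the LoRA adapter and tokenizer (path is an assumed choice)
dpo_trainer.model.save_pretrained("final_checkpoint")
tokenizer.save_pretrained("final_checkpoint")

# Reload the base model in bf16 and merge the adapter weights into it
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
)
merged = PeftModel.from_pretrained(base_model, "final_checkpoint")
merged = merged.merge_and_unload()
merged.save_pretrained(new_model)
tokenizer.save_pretrained(new_model)
```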