Llama-3-13B-Instruct-ft

This is a QLoRA finetune of a merge of pre-trained language models created using mergekit.

The model is based on my passthrough merge of Llama-3-13B-Instruct.

This was primarily an experiment to see how a passthrough merge responds to further finetuning, though the finetuning was done on a small dataset.

The goal was to make a "mid"-sized model like Meta has released in the past, and the merge method was inspired by mlabonne's Llama-3-120B.
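
For reference, a passthrough merge in mergekit is declared by stacking layer slices of a source model. The sketch below is only illustrative: the source model and the slice ranges are assumptions, not the exact recipe used for this merge.

# Hypothetical mergekit passthrough config, written out from Python.
# The slice ranges are placeholders, not the ones used for the 13B merge.
import pathlib

MERGE_CONFIG = """\
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 32]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [8, 32]
merge_method: passthrough  # stack the slices verbatim; no weight averaging
dtype: bfloat16
"""

pathlib.Path("passthrough-13b.yml").write_text(MERGE_CONFIG)
# Then run: mergekit-yaml passthrough-13b.yml ./merged-model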

The model was finetuned at a context length of 8192 and is likely reliable with RoPE scaling up to 32k.
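
A minimal sketch of loading with dynamic RoPE scaling in transformers to reach roughly 32k tokens (the repo id is elinas/Llama-3-13B-Instruct-ft; long-context quality beyond 8192 has not been formally evaluated):

# Load with dynamic RoPE scaling: factor 4.0 stretches the 8192 training
# context toward 32768 tokens. Treat long-context quality as unverified.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "elinas/Llama-3-13B-Instruct-ft",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    rope_scaling={"type": "dynamic", "factor": 4.0},  # 8192 * 4 = 32768
)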

It still cannot do math reliably; neither can Llama-3-8B, and in my tests only Llama-3-70B passes basic arithmetic. However, from some side-by-side testing I conducted, it is a better storywriter/RP model than Llama-3-8B.
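
For quick side-by-side tests of your own, a minimal generation sketch using the Llama-3 chat template (the prompt and sampling settings are illustrative, not tuned recommendations):

# Minimal chat-template generation example; sampling values are arbitrary.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elinas/Llama-3-13B-Instruct-ft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short scene set on a night train."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))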

Further finetuning this model or finetuning the base model on more samples is encouraged.

Datasets

A small dataset was used to see how it affects performance. Originally I planned to use a larger dataset (196k samples), but wanted to start with a smaller one to see how much the model improved with some additional finetuning.

The next step would be finetuning on the larger dataset if further testing shows performance improvements.

Finetuning details

This is a QLoRA model, and all linear projection modules were targeted.

lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj
lora_modules_to_save:
  - embed_tokens
  - lm_head
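
For anyone reproducing this outside Axolotl, the module list above maps onto a PEFT LoraConfig roughly as sketched below; the rank, alpha, and dropout values are placeholders, since they are not recorded in this card.

# Approximate PEFT equivalent of the Axolotl settings above.
# r, lora_alpha, and lora_dropout are assumptions, not values from this card.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=[
        "gate_proj", "down_proj", "up_proj",
        "q_proj", "v_proj", "k_proj", "o_proj",
    ],
    modules_to_save=["embed_tokens", "lm_head"],  # trained and saved in full, not as LoRA
    task_type="CAUSAL_LM",
)
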
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 3
- total_eval_batch_size: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- num_epochs: 1

The paged_adamw_8bit optimizer and DeepSpeed ZeRO 3 were used at an LR of 1e-5 with the cosine scheduler for 1 epoch on 3x RTX 3090s, taking 4h 12m 13s in total.

Sample packing and padding were disabled to significantly reduce VRAM consumption at the cost of speed.
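
Expressed as a Hugging Face TrainingArguments sketch, the settings above look roughly like the following; the actual run was driven by Axolotl, and the DeepSpeed config path is a placeholder.

# Trainer-style mirror of the hyperparameters listed above (illustrative only).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-3-13B-Instruct-ft",
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_steps=25,
    optim="paged_adamw_8bit",               # paged 8-bit AdamW from bitsandbytes
    bf16=True,
    seed=42,
    deepspeed="deepspeed/zero3_bf16.json",  # placeholder path to a ZeRO-3 config
)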

W&B Run Summary

wandb: Run summary:
wandb:                eval/loss 1.00774
wandb:             eval/runtime 535.3847
wandb:  eval/samples_per_second 0.721
wandb:    eval/steps_per_second 0.241
wandb:               total_flos 4167452590080.0
wandb:              train/epoch 1.0
wandb:        train/global_step 1157
wandb:          train/grad_norm 4.50846
wandb:      train/learning_rate 0.0
wandb:               train/loss 1.4115
wandb:               train_loss 1.00352
wandb:            train_runtime 14921.1227
wandb: train_samples_per_second 0.233
wandb:   train_steps_per_second 0.078

Framework versions

  • PEFT 0.10.0
  • Transformers 4.40.0.dev0
  • Pytorch 2.3.0+cu121
  • Datasets 2.15.0
  • Tokenizers 0.15.0

Model Evaluation

TBD - submitted

If you have any questions or comments on the model, feel free to open a discussion in the community tab.

Built with Axolotl
