
Model Card

This is a LASER fine-tune of aloobun's impressive 1.8B-parameter Reyna Mini model.

Model Description

This model is quite conversational, and slightly more so after LASER tuning, even though only a PEFT adapter was trained. Function calling is mediocre, but will be improved in future versions.

Uses

As aloobun's model performs well and is impressive on its own, I decided to add some function calling while practicing the LaserRMT technique.

Direct Use

  • Chat
  • Conversational
  • Text Generation
  • Function Calling
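The base Reyna Mini model appears to be Qwen-family, which typically expects a ChatML-style prompt. Below is a minimal sketch of building such a prompt by hand; the exact template is an assumption, so check the tokenizer's `chat_template` before relying on it.

```python
def chatml_prompt(messages):
    # Assumed ChatML-style template (common for Qwen-family models).
    # Verify against the model tokenizer's chat_template before use.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    # Open the assistant turn so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = chatml_prompt([
    {"role": "system", "content": "You can call functions. Reply with JSON when a tool is needed."},
    {"role": "user", "content": "What's the weather in Paris?"},
])
print(prompt)
```

In practice, `tokenizer.apply_chat_template(...)` from transformers does this for you, using the template shipped with the model.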

Bias, Risks, and Limitations

If you use it for nefarious purposes, this model will take over your house, borrow your car, talk badly to your family, and generally make everything incrementally worse.

Recommendations

Use at your own risk. It's a great small model, owing largely to the quality of the base model before tuning.

Training Details

Training Metrics

  • train/epoch: 2.98
  • train/global_step: 918
  • train/loss: 2.2062
  • train/train_loss: 2.515587423102269
  • train/learning_rate: 0 (cosine schedule fully decayed)
  • train/grad_norm: 0.2638521194458008
  • train/train_runtime: 20945.6359 s
  • train/train_samples_per_second: 1.403
  • train/train_steps_per_second: 0.044
  • train/total_flos: 141790931224363000
  • eval/loss: 2.1797242164611816
  • eval/runtime: 41.0972 s
  • eval/samples_per_second: 4.867
  • eval/steps_per_second: 4.867

Training Procedure

LaserRMT was used to refine the weights, targeting the 16 highest-scoring weight matrices identified by signal-to-noise ratio (SNR) analysis.

This technique avoids keeping low-signal components of the weight matrices that amount to noise. Replacing those matrices with low-rank approximations also decreases the model size slightly.

Axolotl

Axolotl was used for training and dataset tokenization.

Preprocessing

The dataset was formatted in the ShareGPT conversational format for use with Axolotl.
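A ShareGPT-style record looks like the following; the field names (`conversations`, `from`, `value`) are the standard ShareGPT convention, while the example content itself is illustrative only.

```python
import json

# One ShareGPT-format training record: a list of turns, each with a
# speaker tag ("system", "human", "gpt") and the turn's text.
record = {
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant with tool access."},
        {"from": "human", "value": "What's the weather in Paris?"},
        {"from": "gpt", "value": '{"name": "get_weather", "arguments": {"city": "Paris"}}'},
    ]
}
print(json.dumps(record))
```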

Training Hyperparameters

  • lora_r: 64
  • lora_alpha: 16
  • lora_dropout: 0.05
  • gradient_accumulation_steps: 4
  • micro_batch_size: 1
  • num_epochs: 3
  • optimizer: adamw_bnb_8bit
  • lr_scheduler: cosine
  • learning_rate: 0.00025
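The settings above map directly onto an Axolotl config. A fragment reconstructed from the list is shown below; required fields such as the base model and dataset paths are omitted, since they are not given here.

```yaml
# LoRA / training fragment reconstructed from the hyperparameters above
adapter: lora
lora_r: 64
lora_alpha: 16
lora_dropout: 0.05
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00025
```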
Model Format

  • Safetensors, FP16
  • 1.84B params