---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: cls_train_llama3_v1
  results: []
---

# cls_train_llama3_v1

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6482

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8592        | 0.2146 | 50   | 0.8151          |
| 0.7874        | 0.4292 | 100  | 0.7651          |
| 0.6932        | 0.6438 | 150  | 0.7275          |
| 0.6738        | 0.8584 | 200  | 0.7003          |
| 0.5692        | 1.0730 | 250  | 0.6846          |
| 0.5493        | 1.2876 | 300  | 0.6756          |
| 0.5267        | 1.5021 | 350  | 0.6653          |
| 0.595         | 1.7167 | 400  | 0.6550          |
| 0.5441        | 1.9313 | 450  | 0.6482          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
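
## How to use

Since this repository contains a PEFT (LoRA) adapter rather than full model weights, it must be loaded on top of the base model. Below is a minimal inference sketch; the adapter path is a placeholder for this repo's Hub id or a local checkpoint directory, and the example prompt is illustrative only.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "path/or/hub-id/cls_train_llama3_v1"  # placeholder: replace with the actual adapter location

# Loads the base Meta-Llama-3-8B-Instruct weights and applies the LoRA adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Llama-3-Instruct expects its chat template; build the prompt from messages.
messages = [{"role": "user", "content": "Classify the following text: ..."}]  # illustrative prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For deployment without a runtime dependency on `peft`, the adapter can be folded into the base weights with `model.merge_and_unload()` and the result saved as a standalone checkpoint.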
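
## Reproducing the training setup

The following sketch maps the hyperparameters above onto TRL's `SFTTrainer` (the `trl`/`sft` tags indicate TRL supervised fine-tuning was used). The dataset loading, text field name, sequence length, and all `LoraConfig` values (`r`, `lora_alpha`, `target_modules`) are assumptions; the card does not record them, and the "generator" dataset name suggests the original data was fed through a packed/generator dataset rather than the plain file used here.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Placeholder: the card only names a "generator" dataset, not its source.
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

args = TrainingArguments(
    output_dir="cls_train_llama3_v1",
    learning_rate=2e-4,                 # learning_rate: 0.0002
    per_device_train_batch_size=2,      # train_batch_size: 2
    per_device_eval_batch_size=8,       # eval_batch_size: 8
    gradient_accumulation_steps=4,      # total_train_batch_size: 8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=2,
    seed=42,
    fp16=True,                          # mixed_precision_training: Native AMP
)

peft_config = LoraConfig(               # assumed values; not recorded in the card
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    dataset_text_field="text",          # assumed column name
    max_seq_length=1024,                # assumed
)
trainer.train()
```

The optimizer listed above (Adam, betas=(0.9, 0.999), epsilon=1e-08) matches the Trainer's default AdamW settings, so it needs no explicit configuration.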