---
library_name: peft
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: llama-7b-stance
  results: []
---

# llama-7b-stance

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0488
- Accuracy: 0.5583
- Precision: 0.5489
- Recall: 0.5339
- F1: 0.5316

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hypothetical `TrainingArguments` sketch appears after the framework versions below):
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log        | 1.0   | 23   | 1.6614          | 0.3502   | 0.3641    | 0.3722 | 0.3428 |
| No log        | 2.0   | 46   | 1.3645          | 0.4305   | 0.4308    | 0.4285 | 0.4071 |
| No log        | 3.0   | 69   | 1.1866          | 0.4893   | 0.4687    | 0.4613 | 0.4567 |
| No log        | 4.0   | 92   | 1.1019          | 0.5343   | 0.5079    | 0.4942 | 0.4964 |
| No log        | 5.0   | 115  | 1.0842          | 0.5516   | 0.5335    | 0.4926 | 0.4995 |
| No log        | 6.0   | 138  | 1.0671          | 0.5634   | 0.5589    | 0.5165 | 0.5210 |
| No log        | 7.0   | 161  | 1.0930          | 0.5435   | 0.5430    | 0.5265 | 0.5195 |
| No log        | 8.0   | 184  | 1.0652          | 0.5440   | 0.5324    | 0.5368 | 0.5260 |
| No log        | 9.0   | 207  | 1.0162          | 0.5619   | 0.5352    | 0.5279 | 0.5295 |
| No log        | 10.0  | 230  | 1.0488          | 0.5583   | 0.5489    | 0.5339 | 0.5316 |

### Framework versions

- PEFT 0.14.0
- Transformers 4.47.1
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
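
### Reproducing the hyperparameters

The training script is not included in this card, so the following is only a hypothetical sketch of how the hyperparameters listed above would map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction from the hyperparameter list above;
# the actual training script is not part of this card.
args = TrainingArguments(
    output_dir="llama-7b-stance",      # placeholder output path
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,     # 32 * 4 = 128 effective batch, assuming a single device
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=100,
    optim="adamw_torch",               # betas=(0.9, 0.999) and eps=1e-08 are the AdamW defaults
    seed=42,
)
```

Note that the reported total_train_batch_size of 128 is consistent with train_batch_size 32 and gradient_accumulation_steps 4 on a single device (32 × 4 = 128).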
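
## How to use

Since this repository contains a PEFT adapter rather than full model weights, it must be loaded on top of the base model. The following is a minimal sketch; `<user>/llama-7b-stance` is a placeholder for wherever this adapter is hosted, and the causal-LM head is an assumption (the card does not say how stance labels were produced).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# "<user>/llama-7b-stance" is a placeholder adapter id. If the adapter was
# trained with a classification head, load the base model with
# AutoModelForSequenceClassification instead of AutoModelForCausalLM.
model = PeftModel.from_pretrained(base_model, "<user>/llama-7b-stance")
model.eval()
```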