---
base_model: meta-llama/Llama-2-13b-hf
library_name: peft
license: llama2
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: llama2_13B_LORA_FOR_CLASSIFICATION
  results: []
---

# llama2_13B_LORA_FOR_CLASSIFICATION

This model is a LoRA (PEFT) fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) for binary sequence classification, trained on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5708
- Balanced Accuracy: 0.7079
- Accuracy: 0.7530
- Micro F1: 0.7530
- Macro F1: 0.6771
- Weighted F1: 0.7669
- Classification Report:

|              | precision | recall | f1-score | support |
|:-------------|----------:|-------:|---------:|--------:|
| 0            | 0.89      | 0.79   | 0.83     | 857     |
| 1            | 0.44      | 0.63   | 0.52     | 232     |
| accuracy     |           |        | 0.75     | 1089    |
| macro avg    | 0.67      | 0.71   | 0.68     | 1089    |
| weighted avg | 0.79      | 0.75   | 0.77     | 1089    |

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

A code sketch of this setup (with assumed LoRA settings) is included at the end of this card.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Balanced Accuracy | Micro F1 | Macro F1 | Weighted F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:--------:|:--------:|:-----------:|
| 0.4853        | 2.0   | 522  | 0.5482          | 0.7750   | 0.7297            | 0.7750   | 0.7009   | 0.7864      |
| 0.4116        | 3.0   | 783  | 0.5497          | 0.7668   | 0.7182            | 0.7668   | 0.6903   | 0.7786      |
| 0.3224        | 4.0   | 1044 | 0.5708          | 0.7530   | 0.7079            | 0.7530   | 0.6771   | 0.7669      |

Per-class classification reports on the evaluation set at each logged epoch:

| Epoch |              | precision | recall | f1-score | support |
|:-----:|:-------------|----------:|-------:|---------:|--------:|
| 2.0   | 0            | 0.90      | 0.81   | 0.85     | 857     |
| 2.0   | 1            | 0.48      | 0.65   | 0.55     | 232     |
| 2.0   | accuracy     |           |        | 0.78     | 1089    |
| 2.0   | macro avg    | 0.69      | 0.73   | 0.70     | 1089    |
| 2.0   | weighted avg | 0.81      | 0.78   | 0.79     | 1089    |
| 3.0   | 0            | 0.89      | 0.80   | 0.84     | 857     |
| 3.0   | 1            | 0.47      | 0.63   | 0.54     | 232     |
| 3.0   | accuracy     |           |        | 0.77     | 1089    |
| 3.0   | macro avg    | 0.68      | 0.72   | 0.69     | 1089    |
| 3.0   | weighted avg | 0.80      | 0.77   | 0.78     | 1089    |
| 4.0   | 0            | 0.89      | 0.79   | 0.83     | 857     |
| 4.0   | 1            | 0.44      | 0.63   | 0.52     | 232     |
| 4.0   | accuracy     |           |        | 0.75     | 1089    |
| 4.0   | macro avg    | 0.67      | 0.71   | 0.68     | 1089    |
| 4.0   | weighted avg | 0.79      | 0.75   | 0.77     | 1089    |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
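
## How to load the adapter (sketch)

The card does not include usage code, so the snippet below is a minimal, hedged sketch of how a PEFT LoRA adapter for sequence classification is typically loaded with the library versions listed above. The adapter repository id (`your-username/llama2_13B_LORA_FOR_CLASSIFICATION`), the reuse of the eos token for padding, and `num_labels=2` are assumptions based on the two-class report above, not details recorded in this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-13b-hf"
adapter_id = "your-username/llama2_13B_LORA_FOR_CLASSIFICATION"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token  # assumption: Llama-2 ships no pad token

# Base model with a 2-label classification head (labels 0 and 1, as in the report above).
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_model_id,
    num_labels=2,
    torch_dtype=torch.float16,
    device_map="auto",
)
base_model.config.pad_token_id = tokenizer.pad_token_id

# Attach the LoRA adapter on top of the base classification model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

text = "Example input to classify."
inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```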
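
## Reproducing the training setup (sketch)

For reference, the sketch below shows one way the values listed under "Training hyperparameters" could be wired into a PEFT + `Trainer` run. The LoRA settings (`r`, `lora_alpha`, `lora_dropout`, `target_modules`), the padding choice, and the `compute_metrics` helper are assumptions for illustration; the card does not record the actual adapter configuration or metric code, although the reported metric names match `sklearn.metrics` outputs.

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    balanced_accuracy_score,
    classification_report,
    f1_score,
)
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model_id = "meta-llama/Llama-2-13b-hf"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token  # assumption: Llama-2 ships no pad token

model = AutoModelForSequenceClassification.from_pretrained(base_model_id, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# LoRA configuration is an assumption; the card does not record r/alpha/target modules.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)

# Metrics mirroring those reported above (names follow sklearn.metrics).
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "balanced_accuracy": balanced_accuracy_score(labels, preds),
        "micro_f1": f1_score(labels, preds, average="micro"),
        "macro_f1": f1_score(labels, preds, average="macro"),
        "weighted_f1": f1_score(labels, preds, average="weighted"),
        "classification_report": classification_report(labels, preds),
    }

# Values below mirror the "Training hyperparameters" list; the Adam betas/epsilon
# and linear scheduler named there are already the Trainer defaults.
training_args = TrainingArguments(
    output_dir="llama2_13B_LORA_FOR_CLASSIFICATION",
    learning_rate=1e-4,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="epoch",
)

# train_dataset / eval_dataset are placeholders; the training data is not documented.
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
#     tokenizer=tokenizer,
#     compute_metrics=compute_metrics,
# )
# trainer.train()
```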