---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-lora-token-classification
  results: []
---

# mistral-lora-token-classification

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0675
- Precision: 0.7504
- Recall: 0.6836
- F1-score: 0.7154
- Accuracy: 0.7676

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 1.3948        | 1.0   | 762  | 1.2063          | 0.7208    | 0.6344 | 0.6748   | 0.7387   |
| 0.8588        | 2.0   | 1524 | 1.1396          | 0.6313    | 0.8310 | 0.7175   | 0.7203   |
| 0.7512        | 3.0   | 2286 | 1.0329          | 0.7281    | 0.7404 | 0.7342   | 0.7708   |
| 0.6143        | 4.0   | 3048 | 1.0510          | 0.6917    | 0.7512 | 0.7202   | 0.7505   |
| 0.5564        | 5.0   | 3810 | 1.0675          | 0.7504    | 0.6836 | 0.7154   | 0.7676   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
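## Training configuration sketch

The hyperparameter list above maps onto roughly the following setup. This is a hedged reconstruction, not the original training script: the LoRA settings (rank, alpha, dropout, target modules), the dataset, and the label list are not reported in this card, so the values marked as placeholders below are assumptions for illustration only.

```python
# Hedged reconstruction of the training setup implied by the card above.
# LoRA settings and the label set are NOT reported in the card; values
# marked "placeholder" are assumptions.
from peft import LoraConfig, TaskType
from transformers import TrainingArguments

lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=16,              # placeholder: rank not reported in the card
    lora_alpha=32,     # placeholder
    lora_dropout=0.1,  # placeholder
)

# These arguments mirror the "Training hyperparameters" list above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
training_args = TrainingArguments(
    output_dir="mistral-lora-token-classification",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.1,
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed-precision training
)
```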
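## Loading the adapter (sketch)

A minimal inference sketch, assuming the adapter weights are published under a hypothetical repo id. The card does not list the label set, so `num_labels` below is a placeholder that must match whatever was used at fine-tuning time. Note also that older Transformers pins such as the 4.38.2 listed above may lack a built-in token-classification head for Mistral; a newer release, or the same custom head used during training, may be required.

```python
# Minimal inference sketch. The adapter repo id is hypothetical, and
# `num_labels` is a placeholder (the card does not list the labels).
import torch
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "your-username/mistral-lora-token-classification"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForTokenClassification.from_pretrained(
    base_id,
    num_labels=9,  # placeholder: replace with the real label count
    torch_dtype=torch.bfloat16,
)

# Attach the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("Mistral AI is based in Paris.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)
pred_label_ids = logits.argmax(dim=-1)
```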