---
license: mit
tags:
- generated_from_trainer
model-index:
- name: peft-lora-jul
  results: []
---

# peft-lora-jul

This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:

- Loss: 0.0817
- Overall Precision: 0.6734
- Overall Recall: 0.6641
- Overall F1: 0.6687
- Overall Accuracy: 0.9772

Per-entity results:

| Entity | Precision | Recall | F1     | Support |
|:-------|----------:|-------:|-------:|--------:|
| LOC    | 0.5887    | 0.6296 | 0.6085 | 216     |
| MISC   | 0.6111    | 0.2750 | 0.3793 | 40      |
| ORG    | 0.7005    | 0.7250 | 0.7125 | 200     |
| PER    | 0.7540    | 0.7194 | 0.7363 | 196     |

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
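
## How to use

The card does not include a usage example; the following is a minimal inference sketch, assuming the adapter in this repository is loaded with the PEFT library on top of the `Jean-Baptiste/camembert-ner` base model. The adapter id below is a placeholder (use the full Hub repo id or a local path), and the example sentence and label handling are illustrative only.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

base_model_id = "Jean-Baptiste/camembert-ner"
adapter_id = "peft-lora-jul"  # placeholder: replace with the full Hub id or a local adapter path

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForTokenClassification.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

text = "Emmanuel Macron a rencontré les dirigeants de Renault à Paris."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each sub-token to its highest-scoring label from the base model's label set
# (LOC, MISC, ORG, PER, O), which matches the entity types reported above.
predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, base_model.config.id2label[label_id])
```

Predictions are per sub-token; aggregating them into word- or span-level entities (as the evaluation metrics above do) is left out of this sketch.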
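
## Example training configuration

The training script is not included in this card; the sketch below only shows how the reported hyperparameters could map onto `transformers.TrainingArguments`, assuming a single device so that `train_batch_size` equals `per_device_train_batch_size`. The LoRA adapter settings (rank, alpha, target modules, dropout) are not reported in the card and are therefore omitted; `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above; the reported
# Adam betas and epsilon match the Trainer defaults but are spelled out for clarity.
training_args = TrainingArguments(
    output_dir="peft-lora-jul",  # placeholder output directory
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```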