---
license: mit
tags:
- generated_from_trainer
model-index:
- name: camembert-ner-finetuned-jul
results: []
---
# camembert-ner-finetuned-jul
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0716

| Entity | Precision | Recall | F1     | Support |
|:------:|:---------:|:------:|:------:|:-------:|
| LOC    | 0.7297    | 0.7943 | 0.7606 | 316     |
| MISC   | 0.7857    | 0.3929 | 0.5238 | 56      |
| ORG    | 0.7745    | 0.7822 | 0.7783 | 303     |
| PER    | 0.8176    | 0.8075 | 0.8125 | 322     |

- Overall Precision: 0.7731
- Overall Recall: 0.7723
- Overall F1: 0.7727
- Overall Accuracy: 0.9826
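
The per-entity and overall figures above follow the output format of the `seqeval` metric. As a minimal sketch of how such numbers are computed with the `evaluate` library, using hypothetical IOB tag sequences (the actual evaluation data is not reported in this card):

```python
import evaluate

# Hypothetical IOB-tagged sequences, for illustration only; the real
# evaluation set behind the numbers above is not reported in this card.
references = [["B-PER", "I-PER", "O", "B-LOC"]]
predictions = [["B-PER", "I-PER", "O", "B-ORG"]]

seqeval = evaluate.load("seqeval")
results = seqeval.compute(predictions=predictions, references=references)

# `results` holds one dict per entity type, e.g.
# results["PER"] == {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1},
# plus overall_precision, overall_recall, overall_f1 and overall_accuracy.
print(results["overall_f1"])
```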
## Model description
A French named-entity recognition model obtained by fine-tuning [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner), a CamemBERT-based token-classification model. It predicts four entity types: LOC, MISC, ORG and PER.
## Intended uses & limitations
Intended for named-entity recognition in French text. Note the low MISC recall on the evaluation set (0.39): a majority of MISC entities go undetected. The training and evaluation data are not documented, so performance outside their (unknown) domain is unverified.
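
As an illustration, entities can be extracted with the `transformers` pipeline API. The repo id below is inferred from the card title and uploader, and the example sentence is arbitrary:

```python
from transformers import pipeline

# Repo id inferred from this card; adjust if the model lives elsewhere.
ner = pipeline(
    "token-classification",
    model="fgiauna/camembert-ner-finetuned-jul",
    aggregation_strategy="simple",  # merge sub-tokens into whole entities
)

print(ner("Emmanuel Macron a visité Marseille avec une délégation de l'ONU."))
# Each hit is a dict with entity_group (PER/LOC/ORG/MISC), score, word,
# and start/end character offsets.
```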
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
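
As a sketch, these settings map onto the Hugging Face `TrainingArguments` as follows; the `output_dir` is hypothetical, and the Adam betas and epsilon listed above are the library defaults:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters; output_dir
# is a placeholder, not taken from the original training run.
training_args = TrainingArguments(
    output_dir="camembert-ner-finetuned-jul",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    adam_beta1=0.9,     # library default, as listed above
    adam_beta2=0.999,   # library default, as listed above
    adam_epsilon=1e-8,  # library default, as listed above
)
```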
### Training results
Per-entity columns give precision / recall / F1; entity support counts (LOC 316, MISC 56, ORG 303, PER 322) are constant across epochs.

| Training Loss | Epoch | Step | Validation Loss | LOC (P / R / F1) | MISC (P / R / F1) | ORG (P / R / F1) | PER (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------------:|:------------------------:|:------------------------:|:------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| No log        | 1.0   | 476  | 0.0740          | 0.6106 / 0.6899 / 0.6478 | 0.6667 / 0.2857 / 0.4000 | 0.6657 / 0.7426 / 0.7020 | 0.7470 / 0.7702 / 0.7584 | 0.6727            | 0.7091         | 0.6904     | 0.9794           |
| 0.1185        | 2.0   | 952  | 0.0647          | 0.7384 / 0.8038 / 0.7697 | 0.6364 / 0.3750 / 0.4719 | 0.7966 / 0.7756 / 0.7860 | 0.8159 / 0.7981 / 0.8069 | 0.7771            | 0.7693         | 0.7732     | 0.9831           |
| 0.0509        | 3.0   | 1428 | 0.0716          | 0.7297 / 0.7943 / 0.7606 | 0.7857 / 0.3929 / 0.5238 | 0.7745 / 0.7822 / 0.7783 | 0.8176 / 0.8075 / 0.8125 | 0.7731            | 0.7723         | 0.7727     | 0.9826           |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3