---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9274238227146815
    - name: Recall
      type: recall
      value: 0.9363463474661595
    - name: F1
      type: f1
      value: 0.9318637274549098
    - name: Accuracy
      type: accuracy
      value: 0.9839865283492462
---
# distilbert-base-uncased-finetuned-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Precision: 0.9274
- Recall: 0.9363
- F1: 0.9319
- Accuracy: 0.9840

## Model description

This is [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) with a token-classification head, fine-tuned for named entity recognition. DistilBERT is a smaller, faster distillation of BERT, so the model trades a small amount of accuracy for a considerably lighter footprint at inference time.

## Intended uses & limitations

The model is intended for named entity recognition on English text, tagging tokens with the four CoNLL-2003 entity types: person (PER), organization (ORG), location (LOC), and miscellaneous (MISC). Because the base checkpoint is uncased, capitalization cues are discarded at tokenization time, and performance may degrade on text that differs in domain from the CoNLL-2003 newswire data.
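
For inference, a minimal sketch using the `pipeline` API (assuming the fine-tuned checkpoint is available under this name, either as a local directory or a Hub repo):

```python
from transformers import pipeline

# Assumption: "distilbert-base-uncased-finetuned-ner" is the training
# output directory (or a Hub repo id if the model was pushed).
ner = pipeline(
    "token-classification",
    model="distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-tokens into whole entities
)

print(ner("Hugging Face is based in New York City."))
# Returns a list of dicts with entity_group, score, word, and
# start/end character offsets.
```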

## Training and evaluation data

The model was fine-tuned on the train split of [conll2003](https://huggingface.co/datasets/conll2003) (about 14k sentences, consistent with the 878 optimization steps per epoch at batch size 16) and evaluated on the validation split, which produced the metrics reported above.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
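
The card does not record the training script itself; the sketch below reproduces these hyperparameters with the standard Transformers token-classification recipe. The label-alignment helper and `output_dir` are assumptions, not taken from this card:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Load CoNLL-2003 and the pretrained checkpoint named in this card.
raw = load_dataset("conll2003")
label_list = raw["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(label_list)
)

def tokenize_and_align_labels(examples):
    # Re-tokenize pre-split words and align word-level NER tags to
    # sub-tokens; -100 marks positions the loss should ignore.
    tokenized = tokenizer(
        examples["tokens"], truncation=True, is_split_into_words=True
    )
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        previous_word = None
        labels = []
        for word_id in tokenized.word_ids(batch_index=i):
            if word_id is None or word_id == previous_word:
                labels.append(-100)  # special token or non-initial sub-token
            else:
                labels.append(tags[word_id])
            previous_word = word_id
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized

encoded = raw.map(tokenize_and_align_labels, batched=True)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    seed=42,
    # The Adam betas (0.9, 0.999), epsilon 1e-08, and linear LR schedule
    # listed above are the TrainingArguments defaults.
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```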

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2403        | 1.0   | 878  | 0.0701          | 0.9101    | 0.9202 | 0.9151 | 0.9805   |
| 0.0508        | 2.0   | 1756 | 0.0600          | 0.9220    | 0.9350 | 0.9285 | 0.9833   |
| 0.0301        | 3.0   | 2634 | 0.0614          | 0.9274    | 0.9363 | 0.9319 | 0.9840   |
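
Precision, recall, and F1 above are entity-level scores of the kind computed by the `seqeval` metric, while accuracy is token-level. A minimal sketch of computing them from decoded tag sequences (the toy inputs here are purely illustrative):

```python
from datasets import load_metric

# Assumption: predictions/references are per-sentence IOB2 tag sequences,
# with ignored positions (label -100) already filtered out.
metric = load_metric("seqeval")
predictions = [["B-PER", "I-PER", "O", "B-LOC", "I-LOC"]]
references = [["B-PER", "I-PER", "O", "B-LOC", "O"]]

results = metric.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```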

### Framework versions

- Transformers 4.10.2
- PyTorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3