Commit 1deb254: Update README.md
Parent: 887d5f9

README.md (changed)
@@ -40,24 +40,30 @@ should probably proofread and complete it, then remove this comment. -->

Unchanged context:

# distilroberta-base-ner-conll2003

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the conll2003 dataset.

Removed:

It achieves the following results on the evaluation set:
- Loss: 0.0583
- Precision: 0.9493
- Recall: 0.9566
- F1: 0.9529
- Accuracy: 0.9883

Also removed: the auto-generated placeholder sections that followed (each a "## …" heading with "More information needed" beneath it), down to the unchanged "## Training procedure" heading.
Added:

eval F1-Score: 95.29 (CoNLL-03)
test F1-Score: 90.74 (CoNLL-03)

eval F1-Score: 95.29 (CoNLL++ / CoNLL-03 corrected)
test F1-Score: 92.23 (CoNLL++ / CoNLL-03 corrected)

## Model Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-conll2003")
model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-conll2003")

nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
example = "My name is Philipp, I am a Machine Learning Engineer at HuggingFace and live in Nuremberg"

ner_results = nlp(example)
print(ner_results)
```
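As a small follow-up sketch, not part of the commit itself: with `grouped_entities=True`, the transformers token-classification pipeline returns one dict per merged entity span, with keys such as `entity_group`, `score`, `word`, `start` and `end`. A self-contained way to consume that output might look like the snippet below; the example sentence is a made-up placeholder, not taken from the commit.

```python
from transformers import pipeline

# Minimal sketch: load the published checkpoint by its model id and iterate
# over the grouped entity spans that the NER pipeline returns.
nlp = pipeline(
    "ner",
    model="philschmid/distilroberta-base-ner-conll2003",
    grouped_entities=True,  # merge sub-word tokens into whole entity spans
)

# Placeholder sentence for illustration only.
for entity in nlp("My name is Philipp and I live in Nuremberg"):
    # e.g. an entity_group such as PER, ORG or LOC with its aggregated score
    print(entity["entity_group"], round(float(entity["score"]), 3), entity["word"])
```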
Unchanged context:

## Training procedure
@@ -75,6 +81,37 @@ The following hyperparameters were used during training:

Unchanged context:

### Training results

Added:

#### CoNLL2003

It achieves the following results on the evaluation set:
- Loss: 0.0583
- Precision: 0.9493
- Recall: 0.9566
- F1: 0.9529
- Accuracy: 0.9883

It achieves the following results on the test set:
- Loss: 0.2025
- Precision: 0.8999
- Recall: 0.915
- F1: 0.9074
- Accuracy: 0.9741

#### CoNLL++ / CoNLL2003 corrected

It achieves the following results on the evaluation set:
- Loss: 0.0567
- Precision: 0.9493
- Recall: 0.9566
- F1: 0.9529
- Accuracy: 0.9883

It achieves the following results on the test set:
- Loss: 0.1359
- Precision: 0.92
- Recall: 0.9245
- F1: 0.9223
- Accuracy: 0.9785
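For orientation only, and as an assumption on my part rather than something stated in the commit: precision, recall and F1 figures of this kind are typically entity-level seqeval metrics. A minimal, self-contained sketch of how such numbers are computed from predicted versus gold IOB2 tag sequences, using placeholder tags rather than real model output:

```python
# Illustrative sketch only: seqeval-style entity-level metrics from predicted
# vs. gold IOB2 tag sequences. The tag lists below are made-up placeholders.
from evaluate import load  # requires: pip install evaluate seqeval

seqeval = load("seqeval")

predictions = [["B-PER", "I-PER", "O", "B-ORG", "O", "B-LOC"]]
references = [["B-PER", "I-PER", "O", "B-ORG", "O", "B-LOC"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```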
Unchanged context:

### Framework versions