philschmid (HF staff) committed on
Commit a5582fc
1 Parent(s): 1f9be24

Update README.md

Files changed (1)
  1. README.md +33 -13
README.md CHANGED
@@ -39,25 +39,31 @@ should probably proofread and complete it, then remove this comment. -->

  # distilroberta-base-ner-wikiann-conll2003-4-class

- This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the wikiann-conll2003 dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0705
- - Precision: 0.9492
- - Recall: 0.9585
- - F1: 0.9539
- - Accuracy: 0.9882

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

  ## Training procedure

@@ -75,6 +81,20 @@ The following hyperparameters were used during training:

  ### Training results


  ### Framework versions
 
  # distilroberta-base-ner-wikiann-conll2003-4-class

+ This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the merged wikiann and conll2003 datasets. It uses the conll2003 label set:
+
+ O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6), B-MISC (7), I-MISC (8).
+
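For reference, a minimal sketch of the label/id mapping implied by the list above (the indices are taken from that list and assumed to match the model's own config, which can be read directly from the hosted repo):

```python
from transformers import AutoConfig

# Label/id mapping as listed above (assumed to match the model's config).
id2label = {
    0: "O",
    1: "B-PER", 2: "I-PER",
    3: "B-ORG", 4: "I-ORG",
    5: "B-LOC", 6: "I-LOC",
    7: "B-MISC", 8: "I-MISC",
}
label2id = {label: idx for idx, label in id2label.items()}

# Cross-check against the config shipped with the model repo.
config = AutoConfig.from_pretrained("philschmid/distilroberta-base-ner-wikiann-conll2003-4-class")
print(config.id2label)
```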

+ eval F1-Score: **95.39** (merged dataset)
+ test F1-Score: **90.75** (merged dataset)
+
+ eval F1-Score: **95.39** (CoNLL-03)
+ test F1-Score: **90.75** (CoNLL-03)

+ ## Model Usage

+ ```python
+ from transformers import AutoTokenizer, AutoModelForTokenClassification
+ from transformers import pipeline
+
+ tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-wikiann-conll2003-4-class")
+ model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-wikiann-conll2003-4-class")
+
+ nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
+ example = "My name is Philipp and live in Germany"
+
+ nlp(example)
+ ```
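
With `grouped_entities=True` the pipeline returns one dictionary per detected entity span (keys such as `entity_group`, `score`, `word`, `start`, `end`); for the example above this should yield a `PER` entity for "Philipp" and a `LOC` entity for "Germany". On recent transformers releases the equivalent option is `aggregation_strategy="simple"`.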

  ## Training procedure

  ### Training results

+ It achieves the following results on the evaluation set:
+ - Loss: 0.0705
+ - Precision: 0.9492
+ - Recall: 0.9585
+ - F1: 0.9539
+ - Accuracy: 0.9882
+
+ It achieves the following results on the test set:
+ - Loss: 0.239
+ - Precision: 0.8984
+ - Recall: 0.9168
+ - F1: 0.9075
+ - Accuracy: 0.9741
+
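The F1 values above are the harmonic mean of the listed precision and recall, which a short check confirms (values rounded):

```python
# F1 = 2 * P * R / (P + R), the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.9492, 0.9585), 4))  # 0.9538 -> matches the reported 0.9539 up to rounding
print(round(f1(0.8984, 0.9168), 4))  # 0.9075 -> matches the reported test F1
```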

  ### Framework versions