---
license: apache-2.0
tags:
- token-classification
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-swedish-wikiann
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
metrics:
- name: Precision
type: precision
value: 0.8331921416757433
- name: Recall
type: recall
value: 0.84243586083126
- name: F1
type: f1
value: 0.8377885044416501
- name: Accuracy
type: accuracy
value: 0.91930707459758
---
# ner-swedish-wikiann
This model is a fine-tuned version of [nordic-roberta-wiki](https://huggingface.co/flax-community/nordic-roberta-wiki), trained for NER on the Swedish portion of the wikiann dataset.
eval F1-Score: **83.78**
test F1-Score: **83.76**
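For reference, the wikiann data can be pulled directly from the Hub with the `datasets` library. The snippet below is a minimal sketch, assuming the Swedish split is selected with the `sv` configuration:

```python
from datasets import load_dataset

# Load the Swedish portion of wikiann (assumed config name: "sv")
dataset = load_dataset("wikiann", "sv")

# Each example contains whitespace-split tokens and integer NER tags
print(dataset["train"][0]["tokens"])
print(dataset["train"][0]["ner_tags"])

# Label names used by wikiann (PER/ORG/LOC in IOB2 format)
label_list = dataset["train"].features["ner_tags"].feature.names
print(label_list)
```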
## Model Usage
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("birgermoell/ner-swedish-wikiann")
model = AutoModelForTokenClassification.from_pretrained("birgermoell/ner-swedish-wikiann")

# Build a token-classification (NER) pipeline
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "Jag heter Per och jag jobbar på KTH"
nlp(example)
```
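The pipeline returns one prediction per (sub)token. To get whole entity spans instead, recent `transformers` releases accept an aggregation strategy (older releases, such as the 4.6.x series used here, take `grouped_entities=True` instead). Continuing from the snippet above:

```python
# Group subword pieces into whole entity spans.
# Newer transformers releases use aggregation_strategy;
# older releases (e.g. 4.6.x) use grouped_entities=True instead.
nlp_grouped = pipeline(
    "ner",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)
nlp_grouped(example)
```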
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough mapping onto `TrainingArguments` is sketched after the list):
- learning_rate: 4.9086903597787154e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
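As a minimal sketch only, here is roughly how these values map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, and the Adam settings listed above are the `Trainer` defaults rather than custom choices:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ner-swedish-wikiann",          # placeholder output directory
    learning_rate=4.9086903597787154e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    fp16=True,                                 # "Native AMP" mixed precision; requires a CUDA device
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```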
### Training results
It achieves the following results on the evaluation set:
- Loss: 0.3156
- Precision: 0.8332
- Recall: 0.8424
- F1: 0.8378
- Accuracy: 0.9193
It achieves the following results on the test set:
- Loss: 0.3023
- Precision: 0.8301
- Recall: 0.8452
- F1: 0.8376
- Accuracy: 0.92
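The precision, recall, and F1 values above are entity-level scores of the kind typically computed with seqeval, while accuracy is token-level. A minimal sketch of a `compute_metrics` function in that style (the label list and the use of `datasets.load_metric` are assumptions, not taken from the original training script):

```python
import numpy as np
from datasets import load_metric

metric = load_metric("seqeval")
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop special tokens and padding, which the collator labels with -100
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```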
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.6.2
- Tokenizers 0.10.2