---
license: apache-2.0
tags:
- token-classification
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilroberta-base-ner-wikiann
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: wikiann
      type: wikiann
    metrics:
    - name: Precision
      type: precision
      value: 0.8331921416757433
    - name: Recall
      type: recall
      value: 0.84243586083126
    - name: F1
      type: f1
      value: 0.8377885044416501
    - name: Accuracy
      type: accuracy
      value: 0.91930707459758
---

# distilroberta-base-ner-wikiann

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the wikiann dataset.

eval F1-Score: **83.78**
test F1-Score: **83.76**

## Model Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-wikiann")
model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-wikiann")

nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
example = "Jag heter Per och jag jobbar på KTH"  # Swedish: "My name is Per and I work at KTH"

nlp(example)
```
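
With `grouped_entities=True`, the pipeline returns one dictionary per detected entity span, using the standard `transformers` NER pipeline output keys `entity_group`, `word`, `score`, `start`, and `end`. Continuing the snippet above, a minimal sketch for inspecting that output (variable names are illustrative):

```python
# Continues the usage snippet above; field names are the standard
# transformers NER pipeline output keys.
results = nlp(example)

for entity in results:
    # e.g. "PER: Per (0.99)" or "ORG: KTH (0.98)" -- scores shown here are illustrative
    print(f"{entity['entity_group']}: {entity['word']} ({entity['score']:.2f})")
```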
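
The precision, recall, and F1 reported above come from evaluating on wikiann. The card does not include the evaluation script, so the following is only a rough sketch of how the model could be scored with `datasets` and the `seqeval` metric from `evaluate`; the `"en"` config and the 100-example sample are assumptions, not the setup that produced the numbers above:

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumption: the English wikiann config; the card does not state which subset was used.
dataset = load_dataset("wikiann", "en", split="test")

# RoBERTa tokenizers need add_prefix_space=True for pre-tokenized (word-split) input.
tokenizer = AutoTokenizer.from_pretrained(
    "philschmid/distilroberta-base-ner-wikiann", add_prefix_space=True
)
model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-wikiann")
model.eval()

label_names = dataset.features["ner_tags"].feature.names
seqeval = evaluate.load("seqeval")

for sample in dataset.select(range(100)):  # small sample to keep the sketch fast
    enc = tokenizer(sample["tokens"], is_split_into_words=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]
    pred_ids = logits.argmax(dim=-1).tolist()

    # Keep one prediction per word (its first sub-token), the usual alignment step.
    preds, refs, prev_word = [], [], None
    for token_idx, word_idx in enumerate(enc.word_ids()):
        if word_idx is None or word_idx == prev_word:
            continue
        preds.append(model.config.id2label[pred_ids[token_idx]])
        refs.append(label_names[sample["ner_tags"][word_idx]])
        prev_word = word_idx
    seqeval.add(prediction=preds, reference=refs)

print(seqeval.compute())  # overall_precision, overall_recall, overall_f1, overall_accuracy, ...
```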