---
license: apache-2.0
tags:
- token-classification
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilroberta-base-ner-wikiann
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: wikiann
      type: wikiann
    metrics:
    - type: precision
      value: 0.8331921416757433
      name: Precision
    - type: recall
      value: 0.84243586083126
      name: Recall
    - type: f1
      value: 0.8377885044416501
      name: F1
    - type: accuracy
      value: 0.91930707459758
      name: Accuracy
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: wikiann
      type: wikiann
      config: en
      split: test
    metrics:
    - type: accuracy
      value: 0.9200373733433721
      name: Accuracy
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFmMTNkZDYwMDllNjE5ZTVjYzYwYTQyMDFjYzNkYTkxZmVmOTNkOTFlOTU4MmM2MmFlMWQzMTcwZGViOTA3ZCIsInZlcnNpb24iOjF9.pOwPcBmA7XJdq9QgCNoCivTsu0WfsCnvRtzObDrqhFtrO2PjLNf9tmlQeahGcBGFo6yIHvhndBYwf__lN-4nBg
    - type: precision
      value: 0.9258482820953792
      name: Precision
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzFhNGJlMzk0N2JmYmU3YjAxZjJjNGFjZjZjOTJhODc3MjQyODMzYzE2Y2Y4NWQ4YThhMjg3NWI1MGRmODczMiIsInZlcnNpb24iOjF9.eVTQJqXeGY0XZaGURXBrT8sjMl7O_SxuFB4NS7C6jbpr46MMZdusvzkmndOIrGjReB2vB3sAmpcT0hydpqRkDg
    - type: recall
      value: 0.9347545055892119
      name: Recall
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2Y5ZGIzM2JlOWNjZGUzOWU5MGIwOTFiODM4NmU3NGQ3ZmUxYzM4ZmYxNjIwOTE0ZWFiYWJhMzk4NDg4ZjI3MSIsInZlcnNpb24iOjF9.tzl3gTEDFuj7kpGsERkQzXfh7B0Qwao31VcXKF1rSvf3ulVgXsU-vTB2oZiGr3w5AySr_80J0pIpSpvGzfhNAQ
    - type: f1
      value: 0.9302800779500893
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjY5MDM2ZWQ1MzJmNDFhMGFmZmQ1MzM0NmJmOTVmYTM1OWZmNzc4YWI4ZWUwMTFlMTQ5MTJmYWRhNmVmZTUyZCIsInZlcnNpb24iOjF9.zMUq4ZGLfu0eQF7lHNkaf6LByypIevygVGLpBA3jW80OUy5VeZDK7d6q0RV_N4SO5gTkLEjoDvSqLDcaw-9VBw
    - type: loss
      value: 0.3007512390613556
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzI5YmIxODFkN2NkYzJkZDgyZTc4MDhlMDkyMzM3NWFiZWQ1MmUzMDA1MGYyM2RlNzVlNTIwNDcwNTFmNjYwMSIsInZlcnNpb24iOjF9.D8vx5YhoNHY4CdRXEt3rL95odR2kZJ1e_c34HD28xX9YeWKIjjt4E0FSz6Xw4ufJd9UlCnQ_u4VPFTYI-RXlCQ
---

# distilroberta-base-ner-wikiann

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the wikiann dataset.
eval F1-Score: **83.78**
test F1-Score: **83.76**

## Model Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-wikiann")
model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-wikiann")

nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
example = "My name is Philipp and I live in Germany"

nlp(example)
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a minimal reproduction sketch using these values appears at the end of this card):
- learning_rate: 4.9086903597787154e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

It achieves the following results on the evaluation set:
- Loss: 0.3156
- Precision: 0.8332
- Recall: 0.8424
- F1: 0.8378
- Accuracy: 0.9193

It achieves the following results on the test set:
- Loss: 0.3023
- Precision: 0.8301
- Recall: 0.8452
- F1: 0.8376
- Accuracy: 0.92

### Framework versions

- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.6.2
- Tokenizers 0.10.2
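### Reproduction sketch

The hyperparameters listed above map roughly onto the 🤗 `Trainer` API as shown below. This is a minimal sketch, not the original training script: the `tokenize_and_align_labels` preprocessing helper, the `output_dir`, and the omission of metric computation are assumptions; only the hyperparameter values are taken from this card.

```python
# Minimal reproduction sketch (assumption: standard WikiANN token-classification
# preprocessing with the Transformers 4.x Trainer API; not the original script).
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("wikiann", "en")
label_list = dataset["train"].features["ner_tags"].feature.names

# RoBERTa tokenizers need add_prefix_space=True when fed pre-split words.
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base", add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(
    "distilroberta-base", num_labels=len(label_list)
)


def tokenize_and_align_labels(examples):
    # Align word-level NER tags with subword tokens: label the first subword
    # of each word and mask the remaining subwords with -100.
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous = None
        label_ids = []
        for word_id in word_ids:
            if word_id is None or word_id == previous:
                label_ids.append(-100)
            else:
                label_ids.append(tags[word_id])
            previous = word_id
        labels.append(label_ids)
    tokenized["labels"] = labels
    return tokenized


tokenized_dataset = dataset.map(tokenize_and_align_labels, batched=True)

training_args = TrainingArguments(
    output_dir="distilroberta-base-ner-wikiann",  # hypothetical output directory
    learning_rate=4.9086903597787154e-05,         # values from the list above
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    num_train_epochs=5.0,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,                                    # Native AMP
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```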