---
license: mit
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: wikiann
      type: wikiann
      config: en
      split: test
    metrics:
    - type: accuracy
      value: 0.843189280620875
      name: Accuracy
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTk2NjMyYjhiYjVhZWNmNDZmYWJhNmFmOTJiMDk4OWU1N2MzMDUwNDRiYjg0OThiOTliODkzZDE2NjNjZDM4NSIsInZlcnNpb24iOjF9.FUb0cU1eQhdDCxT91Wjmcxnp9o-tsOAjoDlrlAsxO4Ypib41lUwMaokYgCE1tDB1SCjlcAm5ybMcO1l95H-7DA
    - type: precision
      value: 0.8410061269097046
      name: Precision
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzBjZGQxYjY1NDhhZDRiYjkzOWJlY2JkNzdlZjAzMDkwOTg2MDViNzg5NGJmMmY4YjY2ZjZkOGI5OWZhMWU3OCIsInZlcnNpb24iOjF9.bGWRbRXFY6tuPhCW6LI8ksTiJFY8cbYcQx0WSLzDoxTx5PznRNHYH2ooItbHAUnTfkyEPfIX8dLedoWbeMq6AA
    - type: recall
      value: 0.8568527450211155
      name: Recall
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzU4MzhlMjZmNmIxYmFmMjQ1YzVhYjAyOTA5YjlmNzdiYmRjNGYxYzI3MDc5ODJhNWQ1ODg1ZmYzZWY3OTQwMCIsInZlcnNpb24iOjF9.pcpl_OAIRzTqQYu9ZfKKqiz3hDZrGHaIpdAJyV55wePltISeayTlRu0hd8nZDIHOfs5KbJwp642WyOQgbPuACQ
    - type: f1
      value: 0.8488554853827908
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTYwOWI5MzcwYTg4ZWQ3YTEwNGU4ZDc4YWU2YjM2MTM5OGM2MWQyNTNlZTc5Nzg3ODQ3YjU0OGJiMzkxOWRhMyIsInZlcnNpb24iOjF9.IjmzhcjljeaJMWDz9Y8pI07vkjs3Ro3PN_MxcgoTS8wCLzm8WdONkpiOfdqyEw6_zlmBjq9KQgr5IrJIaZtiAA
    - type: loss
      value: 0.6632214784622192
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTIwNjdmZWI0NTA0NTBkNjQwMzg4MTAzM2RjOWY3MzA4YTYyMDcwMThkMWEyMTFmNWY1ZjhkNjAyY2UxMTk0MSIsInZlcnNpb24iOjF9.SYJEfajz45YVWXxFZJCzSIvtClYOWXcApGzBsgPMr6sHSNpT7evVtC0oFOZ_FLbBPVrqS_wRnW_w6D-okS_NBw
---
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4, Multilingual Named Entity Recognition, of the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [GitHub repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1739
- F1: 0.8581
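For quick experimentation, the checkpoint can be loaded with the `pipeline` API. A minimal sketch, assuming the model is hosted under the `transformersbook` namespace on the Hub (adjust the model id to wherever this checkpoint actually lives):
```python
from transformers import pipeline

# Assumption: Hub path; replace with the actual repository id if different.
model_id = "transformersbook/xlm-roberta-base-finetuned-panx-all"
ner = pipeline(
    "token-classification",
    model=model_id,
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Jeff Dean ist ein Informatiker bei Google in Kalifornien"))
# e.g. [{'entity_group': 'PER', 'word': 'Jeff Dean', ...},
#       {'entity_group': 'ORG', 'word': 'Google', ...}, ...]
```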
## Model description
This is the base XLM-RoBERTa model with a token-classification head, fine-tuned for named entity recognition (NER) on the PAN-X subset of the WikiANN dataset. It tags tokens with IOB2-style labels for persons (`PER`), organizations (`ORG`), and locations (`LOC`).
## Intended uses & limitations
The model is intended for multilingual NER over the three PAN-X entity types. Because the training data is drawn from Wikipedia, performance may degrade on text from other domains (e.g. social media or biomedical text), and results on languages outside the fine-tuning set should be treated as zero-shot cross-lingual transfer.
## Training and evaluation data
The model is fine-tuned on the PAN-X (WikiANN) corpora used in the book's chapter on multilingual NER: German, French, Italian, and English, concatenated into a single multilingual training set. The evaluation results above are computed on the matching concatenation of validation splits.
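A minimal sketch of how such a combined corpus can be assembled with 🤗 Datasets (the language list and the `xtreme` loading path follow the book's notebook; the per-language downsampling used there is omitted):
```python
from collections import defaultdict
from datasets import DatasetDict, concatenate_datasets, load_dataset

langs = ["de", "fr", "it", "en"]  # languages used in the book's chapter
panx = defaultdict(list)
for lang in langs:
    # PAN-X is distributed as per-language configs of the XTREME benchmark
    ds = load_dataset("xtreme", name=f"PAN-X.{lang}")
    for split in ("train", "validation", "test"):
        panx[split].append(ds[split])

# Concatenate the per-language splits into one multilingual DatasetDict
panx_all = DatasetDict(
    {split: concatenate_datasets(parts) for split, parts in panx.items()}
)
print(panx_all["train"][0]["tokens"], panx_all["train"][0]["ner_tags"])
```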
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
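These map directly onto `TrainingArguments`; a minimal reconstruction of the settings listed above (the `output_dir` and evaluation settings are assumptions, not recorded values):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-all",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",   # linear decay, as listed above
    evaluation_strategy="epoch",  # assumption: matches the per-epoch table below
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer
```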
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2912 | 1.0 | 835 | 0.1883 | 0.8238 |
| 0.1548 | 2.0 | 1670 | 0.1738 | 0.8480 |
| 0.101 | 3.0 | 2505 | 0.1739 | 0.8581 |
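The F1 values above are span-level scores of the kind computed by `seqeval`, which scores whole entities rather than individual tokens. A small illustration:
```python
from seqeval.metrics import f1_score

# Gold vs. predicted IOB2 tags for two sentences; an entity only counts as
# correct if its type and its full span both match.
y_true = [["B-PER", "I-PER", "O", "B-ORG"], ["B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG"], ["O", "O"]]
print(f"F1: {f1_score(y_true, y_pred):.4f}")  # 0.8000: 2 of 3 entities found
```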
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3