Adriana213 committed
Commit 4cb1f1c
1 Parent(s): 904af79

Update Model Card

Files changed (1): README.md (+21 -8)
README.md CHANGED
@@ -8,27 +8,30 @@ model-index:
  results: []
  ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # xlm-roberta-base-finetuned-panx-all
 
- This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
+ This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the XTREME PAN-X dataset.
  It achieves the following results on the evaluation set:
  - Loss: 0.1758
  - F1 Score: 0.8558
 
  ## Model description
 
- More information needed
+ This model is a fine-tuned version of xlm-roberta-base on a concatenated dataset combining two languages, German (de) and French (fr). It is trained for token classification and reaches competitive F1 scores across several languages.
+
+ ## Intended uses
+
+ - Named Entity Recognition (NER) across multiple languages (see the usage sketch below).
+ - Token classification tasks that benefit from multilingual training data.
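+
+ A minimal usage sketch with the `transformers` pipeline; the model id below is assumed from this repository's name, so adjust it to the actual checkpoint:
+
+ ```python
+ from transformers import pipeline
+
+ # "simple" aggregation merges sub-word tokens into whole entity spans.
+ ner = pipeline(
+     "token-classification",
+     model="Adriana213/xlm-roberta-base-finetuned-panx-all",  # assumed repo id
+     aggregation_strategy="simple",
+ )
+
+ print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
+ ```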
 
- ## Intended uses & limitations
 
- More information needed
+ ## Limitations
+
+ - Performance may vary on languages not seen during training.
+ - The model is fine-tuned on specific datasets and may require further fine-tuning or adjustment for other tasks or domains.
 
  ## Training and evaluation data
 
- More information needed
+ The model was fine-tuned on a combination of the German and French PAN-X subsets, with the training data shuffled and concatenated to form a single multilingual corpus. It was then evaluated on multiple languages (see the evaluation results below).
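+
+ The training corpus described above can be assembled along these lines; this is a sketch with the `datasets` library, where the shuffle seed is an assumption rather than a detail of the actual run:
+
+ ```python
+ from datasets import concatenate_datasets, load_dataset
+
+ # Load the German and French PAN-X subsets of XTREME.
+ panx_de = load_dataset("xtreme", name="PAN-X.de", split="train")
+ panx_fr = load_dataset("xtreme", name="PAN-X.fr", split="train")
+
+ # Concatenate and shuffle into a single multilingual training corpus.
+ train_ds = concatenate_datasets([panx_de, panx_fr]).shuffle(seed=0)
+ print(train_ds)
+ ```
 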
  ## Training procedure
 
@@ -51,6 +54,16 @@ The following hyperparameters were used during training:
  | 0.1587 | 2.0 | 1670 | 0.1705 | 0.8461 |
  | 0.1012 | 3.0 | 2505 | 0.1758 | 0.8558 |
 
+ ### Evaluation results
+
+ The model was evaluated on multiple languages, achieving the following F1 scores:
+
+ | Fine-tuned on \ Evaluated on |   de   |   fr   |   it   |   en   |
+ |:----------------------------:|:------:|:------:|:------:|:------:|
+ | de                           | 0.8658 | 0.7021 | 0.6877 | 0.5830 |
+ | each                         | 0.8658 | 0.8411 | 0.8180 | 0.6870 |
+ | all                          | 0.8685 | 0.8654 | 0.8669 | 0.7678 |
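+
+ The per-language scores above can be approximated along these lines; this is a sketch, not the exact evaluation script used, and it assumes the `xtreme` PAN-X validation splits plus the hypothetical checkpoint id from the usage example:
+
+ ```python
+ import torch
+ import evaluate
+ from datasets import load_dataset
+ from transformers import AutoModelForTokenClassification, AutoTokenizer
+
+ ckpt = "Adriana213/xlm-roberta-base-finetuned-panx-all"  # assumed repo id
+ tok = AutoTokenizer.from_pretrained(ckpt)
+ model = AutoModelForTokenClassification.from_pretrained(ckpt).eval()
+ seqeval = evaluate.load("seqeval")
+
+ def f1_for(lang: str, n: int = 256) -> float:
+     ds = load_dataset("xtreme", name=f"PAN-X.{lang}", split=f"validation[:{n}]")
+     tags = ds.features["ner_tags"].feature  # ClassLabel: int ids <-> BIO strings
+     preds, refs = [], []
+     for ex in ds:
+         enc = tok(ex["tokens"], is_split_into_words=True,
+                   truncation=True, return_tensors="pt")
+         with torch.no_grad():
+             pred_ids = model(**enc).logits[0].argmax(-1).tolist()
+         # Keep the prediction for the first sub-token of each word.
+         seen, pred_labels = set(), []
+         for i, wid in enumerate(enc.word_ids()):
+             if wid is not None and wid not in seen:
+                 seen.add(wid)
+                 pred_labels.append(model.config.id2label[pred_ids[i]])
+         preds.append(pred_labels)
+         # Trim references in case truncation dropped trailing words.
+         refs.append([tags.int2str(t) for t in ex["ner_tags"]][:len(pred_labels)])
+     return seqeval.compute(predictions=preds, references=refs)["overall_f1"]
+
+ for lang in ["de", "fr", "it", "en"]:
+     print(lang, round(f1_for(lang), 4))
+ ```
 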
  ### Framework versions