---
license: mit
base_model: xlm-roberta-base
tags:
  - generated_from_trainer
model-index:
  - name: xlm-roberta-base-finetuned-panx-all
    results: []
language:
  - en
  - de
  - it
  - fr
metrics:
  - f1
library_name: transformers
---

# xlm-roberta-base-finetuned-panx-all

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X subsets of the XTREME dataset. It achieves the following results on the evaluation set:

- Loss: 0.1758
- F1 Score: 0.8558

## Model description

This model is a fine-tuned version of xlm-roberta-base on a concatenated dataset combining the PAN-X data for all four card languages: German (de), French (fr), Italian (it), and English (en). It has been trained for token classification and achieves competitive F1-scores across these languages.

## Intended uses

- Named Entity Recognition (NER) across multiple languages.
- Token classification tasks that benefit from multilingual training data.
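
For quick experimentation, the model can be loaded with the `transformers` token-classification pipeline. A minimal sketch, assuming the checkpoint is published under the repo id shown below (adjust it to wherever the weights are actually hosted):

```python
from transformers import pipeline

# Assumed repo id; replace with the actual location of this checkpoint.
ner = pipeline(
    "token-classification",
    model="Adriana213/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)

print(ner("Angela Merkel a visité Paris avec Emmanuel Macron."))
```

`aggregation_strategy="simple"` groups XLM-R's subword tokens back into whole-entity spans, which is usually the desired output format for NER.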

## Limitations

Performance may vary on languages not seen during training. The model is fine-tuned on specific datasets and may require further fine-tuning or adjustments for other tasks or domains.

## Training and evaluation data

The model was fine-tuned on the concatenation of the German, French, Italian, and English PAN-X subsets, with the training data shuffled to form a single multilingual corpus. It was then evaluated separately on each language, showing robust performance across all four.
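
The card does not include the preprocessing code; below is a minimal sketch of how such a corpus can be built with the `datasets` library, assuming the PAN-X configs of XTREME and a shuffle seed of 42:

```python
from datasets import load_dataset, concatenate_datasets, DatasetDict

langs = ["de", "fr", "it", "en"]

# Load the PAN-X subset of XTREME for each language.
panx = [load_dataset("xtreme", name=f"PAN-X.{lang}") for lang in langs]

# Concatenate the per-language splits into one multilingual corpus.
corpus = DatasetDict({
    split: concatenate_datasets([ds[split] for ds in panx])
    for split in ["train", "validation", "test"]
})

# Shuffle the training data so languages are interleaved.
corpus["train"] = corpus["train"].shuffle(seed=42)
```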

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
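
These settings map directly onto `TrainingArguments`; a sketch follows, with `output_dir` and the per-epoch evaluation strategy as assumptions (the Adam betas and epsilon listed above are the `Trainer` defaults, so they need no explicit arguments):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-all",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumed: matches the per-epoch results below
)
```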

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.299         | 1.0   | 835  | 0.2074          | 0.8078   |
| 0.1587        | 2.0   | 1670 | 0.1705          | 0.8461   |
| 0.1012        | 3.0   | 2505 | 0.1758          | 0.8558   |

## Evaluation results

The model was evaluated on multiple languages, achieving the following F1-scores:

| Fine-tuned on | de     | fr     | it     | en     |
|:-------------:|:------:|:------:|:------:|:------:|
| de            | 0.8658 | 0.7021 | 0.6877 | 0.5830 |
| each          | 0.8658 | 0.8411 | 0.8180 | 0.6870 |
| all           | 0.8685 | 0.8654 | 0.8669 | 0.7678 |

Columns give the evaluation language. The `de` and `each` rows are baselines (fine-tuned on German only, and on each language individually); the `all` row corresponds to this model, fine-tuned on all four languages at once.
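
The card does not state how these F1-scores were computed; a minimal sketch of span-level micro-F1 with `seqeval` (via the `evaluate` library), shown on toy IOB2 tag sequences:

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Toy data: one sentence, predicted vs. gold IOB2 tags.
predictions = [["B-PER", "I-PER", "O", "B-ORG", "O"]]
references  = [["B-PER", "I-PER", "O", "B-LOC", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"])  # micro-averaged F1 over entity spans
```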

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1