---
license: mit
tags:
  - generated_from_trainer
datasets:
  - xtreme
metrics:
  - f1
model-index:
  - name: xlm-roberta-base-finetuned-panx-it
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: xtreme
          type: xtreme
          config: PAN-X.it
          split: validation
          args: PAN-X.it
        metrics:
          - name: F1
            type: f1
            value: 0.835038886614818
---

# xlm-roberta-base-finetuned-panx-it

This model is a fine-tuned version of xlm-roberta-base on the xtreme dataset. It achieves the following results on the evaluation set:

- Loss: 0.2401
- F1: 0.8350

## Model description

More information needed

## Intended uses & limitations

This is a simple test from the O'Reilly book "Natural Language Processing with Transformers". It should not be used for anything but testing purposes.
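
Even as a test checkpoint, it can be exercised end to end for Italian named-entity recognition. A minimal inference sketch, assuming the `transformers` library is installed and the checkpoint is loaded from its Hub repository (the `model_id` below is a placeholder, not the confirmed repository path):

```python
from transformers import pipeline

# Placeholder id: replace with this repository's full "<user>/xlm-roberta-base-finetuned-panx-it" path.
model_id = "xlm-roberta-base-finetuned-panx-it"

# Token-classification pipeline; aggregation_strategy="simple" merges word pieces into entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# Italian example sentence (PAN-X.it is the Italian split of the WikiANN/PAN-X NER data).
print(ner("Mi chiamo Laura e vivo a Torino."))
```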

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
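
For reference, a hedged sketch of the corresponding `TrainingArguments`; the `output_dir` and evaluation settings are assumptions, not values taken from this card (the Adam betas and epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-it",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    # Assumption: one validation pass per epoch, matching the per-epoch rows in the results table below.
    evaluation_strategy="epoch",
    # adam_beta1=0.9, adam_beta2=0.999 and adam_epsilon=1e-8 are the defaults, so they are not repeated here.
)
```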

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8197        | 1.0   | 70   | 0.3529          | 0.7242 |
| 0.2844        | 2.0   | 140  | 0.2484          | 0.8016 |
| 0.1861        | 3.0   | 210  | 0.2401          | 0.8350 |
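
The F1 column reports span-level scores in the style of the book's workflow, which relies on `seqeval`; this is an assumption about the metric backend, since the card only records the final numbers. A toy illustration:

```python
from seqeval.metrics import f1_score

# Toy IOB2 example: the PER entity is predicted correctly, the LOC entity is missed.
y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O", "O"]]

# Precision = 1/1, recall = 1/2, so F1 = 2/3.
print(f1_score(y_true, y_pred))
```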

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3