---
license: mit
tags:
  - generated_from_trainer
  - language-identification
  - openvino
datasets:
  - fleurs
metrics:
  - accuracy
model-index:
  - name: xlm-v-base-language-id
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: fleurs
          type: fleurs
          config: all
          split: validation
          args: all
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9930337861372344
pipeline_tag: text-classification
---

# xlm-v-base-language-id

This model is a fine-tuned version of facebook/xlm-v-base on the google/fleurs dataset. It achieves the following results on the evaluation set:

- Loss: 0.0241
- Accuracy: 0.9930

## Usage

The simplest way to use the model is with a text classification pipeline:

```python
from transformers import pipeline

model_id = "juliensimon/xlm-v-base-language-id"
p = pipeline("text-classification", model=model_id)
p("Hello world")
# [{'label': 'English', 'score': 0.9802148342132568}]
```

The model is also compatible with Optimum Intel. For example, you can optimize it with Intel OpenVINO and enjoy a 2x inference speedup (or more).

```python
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "juliensimon/xlm-v-base-language-id"
# from_transformers=True exports the PyTorch model to the OpenVINO IR format
ov_model = OVModelForSequenceClassification.from_pretrained(
    model_id, from_transformers=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
p = pipeline("text-classification", model=ov_model, tokenizer=tokenizer)
p("Hello world")
# [{'label': 'English', 'score': 0.9802149534225464}]
```
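To avoid re-exporting the model on every run, you can save the converted model locally and reload it later. A minimal sketch (the directory name is arbitrary):

```python
# Save the exported OpenVINO model and tokenizer for later reuse
ov_model.save_pretrained("xlm-v-base-language-id-ov")
tokenizer.save_pretrained("xlm-v-base-language-id-ov")

# Reload directly from the local directory, skipping the export step
ov_model = OVModelForSequenceClassification.from_pretrained(
    "xlm-v-base-language-id-ov"
)
```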

## Intended uses & limitations

The model can accurately detect 102 languages. You can find the list on the dataset page.
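If you prefer to enumerate the supported languages programmatically, they are also stored in the model configuration. A minimal sketch:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("juliensimon/xlm-v-base-language-id")
# id2label maps class indices to the 102 language names
labels = [config.id2label[i] for i in range(config.num_labels)]
print(len(labels))        # 102
print(sorted(labels)[:5])
```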

## Training and evaluation data

The model has been trained and evaluated on the complete google/fleurs training and validation sets.
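For reference, here is one way to load those splits with the datasets library; a sketch assuming the public FLEURS schema (`transcription` text and `lang_id` label). Note that the `all` configuration is large, since FLEURS also ships audio:

```python
from datasets import load_dataset

# The "all" configuration bundles every one of the 102 languages
train_ds = load_dataset("google/fleurs", "all", split="train")
eval_ds = load_dataset("google/fleurs", "all", split="validation")

# Each example carries a transcription and an integer language label
print(eval_ds[0]["transcription"], eval_ds[0]["lang_id"])
```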

## Training procedure

The training script is included in the repository. The model has been trained on a p3dn.24xlarge instance on AWS (8 NVIDIA V100 GPUs).

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):

- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
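As a rough guide, these settings map to Hugging Face `TrainingArguments` as follows; this is a sketch, not the exact training script (which is included in the repository):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-v-base-language-id",
    learning_rate=3e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=4,  # yields the reported total batch size of 512
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,  # Native AMP mixed-precision training
)
```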

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6368        | 1.0   | 531  | 0.4593          | 0.9689   |
| 0.059         | 2.0   | 1062 | 0.0412          | 0.9899   |
| 0.0311        | 3.0   | 1593 | 0.0275          | 0.9918   |
| 0.0255        | 4.0   | 2124 | 0.0243          | 0.9928   |
| 0.017         | 5.0   | 2655 | 0.0241          | 0.9930   |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2