
classifier_v5

This model is a fine-tuned version of xlm-roberta-base on the dataset described below under Training and evaluation data (MASSIVE plus in-app interactions). It achieves the following results on the evaluation set:

  • Loss: 0.3360
  • Accuracy: 0.9548
  • F1: 0.9548
  • Precision: 0.9548
  • Recall: 0.9548

Model description

The model classifies the text of user interactions, deciding which of the following categories each one belongs to (a sketch for inspecting the label mapping follows the list):

  • education
  • general
  • maths
  • news
  • restaurants
  • weather
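
The label names above should match the checkpoint's id2label mapping. A minimal sketch for inspecting it, assuming the labels are stored in the model config in the usual Hugging Face way:

from transformers import AutoConfig

# Load only the configuration; no model weights are downloaded.
config = AutoConfig.from_pretrained('SoyLuzia/classifier_v5')
print(config.id2label)  # expected shape (ordering assumed): {0: 'education', 1: 'general', ...}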

Training and evaluation data

The data used is a dataset built from:

  • MASSIVE: a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, which is composed of general Intelligent Voice Assistant single-shot interactions.
  • Interactions collected from our own app, added on top of MASSIVE (a sketch for loading the MASSIVE portion follows this list).
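
A minimal sketch for loading the public MASSIVE portion with the datasets library, assuming the AmazonScience/massive dataset on the Hugging Face Hub and its en-US configuration (the in-app interactions are not public; the utt field name follows the MASSIVE schema):

from datasets import load_dataset

# MASSIVE exposes one configuration per locale; load the English training split.
massive = load_dataset('AmazonScience/massive', 'en-US', split='train')
print(massive[0]['utt'])  # the raw utterance text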

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.9999) and epsilon=1e-08
  • lr_scheduler_type: constant_with_warmup
  • lr_scheduler_warmup_steps: 800
  • num_epochs: 10
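
A minimal sketch of the corresponding TrainingArguments, assuming the standard transformers Trainer API (output_dir is a placeholder; dataset and Trainer wiring are omitted):

from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir='classifier_v5',  # placeholder
    learning_rate=3e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.9999,
    adam_epsilon=1e-08,
    lr_scheduler_type='constant_with_warmup',
    warmup_steps=800,
    num_train_epochs=10,
)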

Training results

Training Loss  Epoch  Step  Validation Loss  Accuracy  F1      Precision  Recall
0.7727         1.0     856  0.3245           0.9338    0.9338  0.9338     0.9338
0.2747         2.0    1712  0.2847           0.9413    0.9413  0.9413     0.9413
0.1488         3.0    2568  0.3170           0.9473    0.9473  0.9473     0.9473
0.0911         4.0    3424  0.3333           0.9488    0.9488  0.9488     0.9488
0.0642         5.0    4280  0.3360           0.9549    0.9549  0.9549     0.9549

Framework versions

  • Transformers 4.34.1
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.5
  • Tokenizers 0.14.1

Evaluation results

On the test dataset:

              precision    recall  f1-score   support

   education       0.99      0.87      0.93       189
     general       0.91      0.95      0.93       594
       maths       1.00      0.95      0.97        75
        news       0.93      0.95      0.94       372
 restaurants       0.97      0.99      0.98       165
     weather       0.99      0.96      0.97       468

    accuracy                           0.95      1863
   macro avg       0.97      0.95      0.95      1863
weighted avg       0.95      0.95      0.95      1863

On a tagged dataset of direct responses from the app:

              precision    recall  f1-score   support

   education       0.48      0.55      0.51       229
     general       0.80      0.79      0.79      1367
       maths       0.96      0.76      0.85       231
        news       0.45      0.60      0.51       188
 restaurants       0.86      0.95      0.90       158
     weather       0.91      0.65      0.76       186

    accuracy                           0.75      2359
   macro avg       0.74      0.72      0.72      2359
weighted avg       0.77      0.75      0.75      2359
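
Reports in this layout match the output of scikit-learn's classification_report. A minimal sketch for producing one, with hypothetical gold and predicted label lists standing in for the real evaluation data:

from sklearn.metrics import classification_report

# y_true / y_pred are hypothetical placeholders, not the real eval data.
y_true = ['weather', 'news', 'maths']
y_pred = ['weather', 'news', 'general']
print(classification_report(y_true, y_pred))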

Demo: How to use with the Hugging Face Transformers pipeline

Requires transformers: pip install transformers

from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

# Load the fine-tuned checkpoint and its tokenizer from the Hub.
model_name = 'SoyLuzia/classifier_v5'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Wrap model and tokenizer in a text-classification pipeline.
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
res = classifier("¿Qué tiempo hace hoy en Villena?")  # "What's the weather like in Villena today?"
print(res)

Outputs:

[{'label': 'weather', 'score': 0.9999312162399292}]
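
The pipeline also accepts a batch of texts; a short sketch reusing the classifier built above, with truncation enabled as a guard against inputs longer than the model's 512-token limit:

# Batch classification with the pipeline built above.
texts = ['¿Qué tiempo hace hoy en Villena?', 'How much is 2 + 2?']
results = classifier(texts, truncation=True)
for text, res in zip(texts, results):
    print(text, '->', res['label'], round(res['score'], 4))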