---
license: mit
language:
  - multilingual
tags:
  - zero-shot-classification
  - text-classification
  - pytorch
metrics:
  - accuracy
  - f1-score
---

# xlm-roberta-large-hungarian-execspeech-cap-v3

## Model description

An `xlm-roberta-large` model fine-tuned on multilingual training data containing executive speech (execspeech) texts labelled with major topic codes from the Comparative Agendas Project.

## How to use the model

### Loading and tokenizing input data

```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Mapping from model output indices to CAP major topic codes
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # maximum sequence length in tokens; adjust to your data

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is a pandas DataFrame with a "text" column (see the example below)
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
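
For reference, `data` can be any DataFrame with a `text` column holding the documents to classify, for example (a hypothetical toy input, not part of this card):

```python
import pandas as pd

# Toy input: a single Hungarian executive speech snippet (hypothetical example)
data = pd.DataFrame({"text": ["A kormány új oktatási programot jelentett be."]})
```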

### Inference using the Trainer class

```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-hungarian-execspeech-cap-v3',
                                                           num_labels=num_labels,
                                                           problem_type="multi_label_classification",
                                                           ignore_mismatched_sizes=True
                                                           )

training_args = TrainingArguments(
    output_dir='.',
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8
)

trainer = Trainer(
    model=model,
    args=training_args
)

# Raw logits; the row-wise argmax picks the highest-scoring class index,
# which is then mapped back to its CAP major topic code
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
    columns={0: 'predicted'}).reset_index(drop=True)
```
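
To inspect the results next to the inputs, the predicted labels can be joined back onto the original DataFrame (a minimal sketch, assuming `data` and `predicted` from the steps above):

```python
# Side-by-side view of input texts and predicted CAP major topic codes
result = pd.concat([data.reset_index(drop=True), predicted], axis=1)
print(result.head())
```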

## Fine-tuning procedure

`xlm-roberta-large-hungarian-execspeech-cap-v3` was fine-tuned using the Hugging Face `Trainer` class with the following hyperparameters:

```python
training_args = TrainingArguments(
    output_dir=f"../model/{model_dir}/tmp/",
    logging_dir=f"../logs/{model_dir}/",
    logging_strategy='epoch',
    num_train_epochs=10,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-06,
    seed=42,
    save_strategy='epoch',
    evaluation_strategy='epoch',
    save_total_limit=1,
    load_best_model_at_end=True
)
```

We also incorporated an `EarlyStoppingCallback` into the process with a patience of 2 epochs, as sketched below.
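
A minimal sketch of how such a callback can be attached to the `Trainer` (`train_data` and `val_data` are hypothetical tokenized splits, not part of this card):

```python
from transformers import EarlyStoppingCallback, Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_data,  # hypothetical tokenized training split
    eval_dataset=val_data,     # hypothetical tokenized validation split
    # Stop training if the evaluation metric fails to improve for 2 consecutive epochs
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```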

## Model performance

The model was evaluated on a test set of 16785 examples (10% of the available data).
Model accuracy is 0.66.

| label | precision | recall | f1-score | support |
|:------------|----------:|-------:|---------:|--------:|
| 0 | 0.57 | 0.65 | 0.61 | 1323 |
| 1 | 0.57 | 0.52 | 0.55 | 876 |
| 2 | 0.73 | 0.75 | 0.74 | 691 |
| 3 | 0.72 | 0.60 | 0.66 | 182 |
| 4 | 0.61 | 0.56 | 0.58 | 545 |
| 5 | 0.73 | 0.55 | 0.63 | 220 |
| 6 | 0.80 | 0.56 | 0.66 | 380 |
| 7 | 0.78 | 0.67 | 0.72 | 163 |
| 8 | 0.68 | 0.60 | 0.64 | 436 |
| 9 | 0.75 | 0.72 | 0.74 | 115 |
| 10 | 0.51 | 0.54 | 0.53 | 229 |
| 11 | 0.55 | 0.39 | 0.46 | 95 |
| 12 | 0.59 | 0.39 | 0.47 | 198 |
| 13 | 0.62 | 0.44 | 0.51 | 568 |
| 14 | 0.51 | 0.53 | 0.52 | 200 |
| 15 | 0.52 | 0.54 | 0.53 | 214 |
| 16 | 0.52 | 0.29 | 0.37 | 389 |
| 17 | 0.66 | 0.65 | 0.65 | 2496 |
| 18 | 0.64 | 0.50 | 0.56 | 1486 |
| 19 | 0.56 | 0.36 | 0.44 | 182 |
| 20 | 0.55 | 0.31 | 0.40 | 151 |
| 21 | 0.70 | 0.83 | 0.76 | 5646 |
| macro avg | 0.63 | 0.54 | 0.58 | 16785 |
| weighted avg | 0.65 | 0.66 | 0.65 | 16785 |
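
The table follows the layout of `sklearn.metrics.classification_report`; a sketch of how to compute the same metrics on your own test split (`y_true` is a hypothetical array of gold label indices, and `probs` comes from the inference step above):

```python
import numpy as np
from sklearn.metrics import classification_report

# y_true: gold CAP label indices for the test set (hypothetical variable)
y_pred = np.argmax(probs, axis=1)
print(classification_report(y_true, y_pred, digits=2))
```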

## Inference platform

This model is used by the CAP Babel Machine, an open-source, free natural-language-processing tool designed to simplify and speed up projects for comparative research.

## Cooperation

Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language), either sent to poltextlab{at}poltextlab{dot}com or submitted via the CAP Babel Machine.

## Debugging and issues

This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually (`pip install sentencepiece`).

If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.