xlm-roberta-large-english-media-cap

Model description

An xlm-roberta-large model fine-tuned on English training data containing texts from the media domain labelled with major topic codes from the Comparative Agendas Project (CAP).

How to use the model

Loading and tokenizing input data

import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer, 
                          Trainer, TrainingArguments)

# Mapping from the model's output indices to CAP major topic codes
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)

# Maximum sequence length for tokenization; not defined in the original snippet,
# 256 is an assumed placeholder value
MAXLEN = 256

def tokenize_dataset(data):
    # 'data' is a batch (dict of lists) supplied by Dataset.map
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized
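
# The lines below expect a pandas DataFrame 'data' with a 'text' column holding
# the documents to classify; a minimal, purely illustrative example (the sentence
# is hypothetical, not taken from the training data):
data = pd.DataFrame({"text": ["The parliament passed a new healthcare funding bill."]})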

hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)

Inference using the Trainer class

model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-english-media-cap',
                                                           num_labels=num_labels,
                                                           problem_type="multi_label_classification",
                                                           ignore_mismatched_sizes=True
                                                           )

training_args = TrainingArguments(
    output_dir='.',
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8
)

trainer = Trainer(
    model=model,
    args=training_args
)

# raw prediction scores (logits); the highest-scoring class is taken as the
# predicted label and mapped to its CAP major topic code
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
    columns={0: 'predicted'}).reset_index(drop=True)
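
The predicted column contains CAP major topic codes as strings. As a purely illustrative follow-up (assuming the same data DataFrame as in the tokenization step), the predictions can be joined back to the input texts for inspection:

# attach the predicted CAP codes to the original texts
results = pd.concat([data.reset_index(drop=True), predicted], axis=1)
print(results.head())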

Fine-tuning procedure

xlm-roberta-large-english-media-cap was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:

training_args = TrainingArguments(
    output_dir=f"../model/{model_dir}/tmp/",
    logging_dir=f"../logs/{model_dir}/",
    logging_strategy='epoch',
    num_train_epochs=10,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-06,
    seed=42,
    save_strategy='epoch',
    evaluation_strategy='epoch',
    save_total_limit=1,
    load_best_model_at_end=True
)

We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs, as sketched below.
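
A minimal sketch of how such a callback can be attached to the Trainer; train_dataset and eval_dataset are placeholders for the tokenized training and validation splits, which are not part of this card:

from transformers import EarlyStoppingCallback

# Note: EarlyStoppingCallback also expects metric_for_best_model to be set in
# TrainingArguments (assumed here; not listed among the hyperparameters above).
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,   # placeholder: tokenized training split
    eval_dataset=eval_dataset,     # placeholder: tokenized validation split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)]
)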

Model performance

The model was evaluated on a test set of 13,802 examples (10% of the available data).
Model accuracy is 0.78. Label indices in the table below correspond to the keys of CAP_NUM_DICT above (e.g. label 21 is the '999' code).

label         precision  recall  f1-score  support
0             0.75       0.80    0.77          618
1             0.75       0.61    0.67          385
2             0.86       0.79    0.82          780
3             0.72       0.71    0.71          143
4             0.68       0.64    0.66          312
5             0.83       0.89    0.86          746
6             0.79       0.83    0.81          407
7             0.81       0.82    0.81          406
8             0.59       0.55    0.56           44
9             0.80       0.81    0.81          683
10            0.81       0.80    0.80         1297
11            0.65       0.69    0.67          167
12            0.64       0.74    0.69          345
13            0.76       0.74    0.75         1068
14            0.75       0.77    0.76         1168
15            0.73       0.64    0.68          306
16            0.78       0.51    0.61          152
17            0.77       0.84    0.81         1775
18            0.84       0.82    0.83         2475
19            0.69       0.53    0.60          158
20            0.62       0.71    0.66          367
21            0.00       0.00    0.00            0
macro avg     0.71       0.69    0.70        13802
weighted avg  0.78       0.78    0.78        13802

Inference platform

This model is used by the CAP Babel Machine, a free, open-source natural language processing tool designed to simplify and speed up projects for comparative research.

Cooperation

Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (from any domain or language), either sent to poltextlab{at}poltextlab{dot}com or submitted through the CAP Babel Machine.

Debugging and issues

This architecture uses the sentencepiece tokenizer. To run the model with a transformers version earlier than 4.27, you need to install sentencepiece manually (pip install sentencepiece).

If you encounter a RuntimeError when loading the model with from_pretrained(), passing ignore_mismatched_sizes=True (as in the inference example above) should solve the issue.
