---
license: mit
language:
- de
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-german-cap
## Model description
An `xlm-roberta-large` model fine-tuned on German training data labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).

## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer, 
                          Trainer, TrainingArguments)

CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)

MAXLEN = 256  # maximum sequence length; not specified in the original card, adjust as needed

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is assumed to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
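
The snippet above expects a pandas DataFrame named `data` with a `text` column. A minimal, purely illustrative example (the sentences are invented):

```python
data = pd.DataFrame({"text": [
    "Der Bundestag debattiert über das neue Klimaschutzgesetz.",        # hypothetical example text
    "Die Regierung kündigt eine Reform der Krankenversicherung an.",    # hypothetical example text
]})
```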

#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-german-cap',
                                                           num_labels=num_labels,
                                                           problem_type="multi_label_classification",
                                                           ignore_mismatched_sizes=True
                                                           )

training_args = TrainingArguments(
    output_dir='.',
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8
)

trainer = Trainer(
    model=model,
    args=training_args
)

probs = trainer.predict(test_dataset=dataset).predictions
# map each example's highest-scoring label index back to its CAP major topic code
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
    columns={0: 'predicted'}).reset_index(drop=True)
```
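
`Trainer.predict` returns raw logits, and the pipeline above simply picks the single highest-scoring label. If a probability distribution over the labels is also wanted, a softmax can be applied first; a sketch using `scipy`, which is not otherwise used in this card:

```python
from scipy.special import softmax

class_probs = softmax(probs, axis=1)  # shape: (num_examples, 22), each row sums to 1
```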

### Fine-tuning procedure
`xlm-roberta-large-german-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
    output_dir=f"../model/{model_dir}/tmp/",
    logging_dir=f"../logs/{model_dir}/",
    logging_strategy='epoch',
    num_train_epochs=10,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-06,
    seed=42,
    save_strategy='epoch',
    evaluation_strategy='epoch',
    save_total_limit=1,
    load_best_model_at_end=True
)
```
We also incorporated an `EarlyStoppingCallback` with a patience of 2 epochs, as sketched below.
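
A minimal sketch of how such a callback attaches to the Trainer; `train_dataset` and `eval_dataset` are hypothetical tokenized splits, and with `load_best_model_at_end=True` the `metric_for_best_model` argument defaults to the evaluation loss:

```python
from transformers import EarlyStoppingCallback

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # hypothetical tokenized training split
    eval_dataset=eval_dataset,    # hypothetical tokenized validation split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],  # stop after 2 epochs without improvement
)
```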

## Model performance
The model was evaluated on a test set of 6309 examples (10% of the available data).<br>
Model accuracy is **0.69**. Label indices in the table below map to CAP major topic codes via `CAP_NUM_DICT` above.
| label        |   precision |   recall |   f1-score |   support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0            |        0.65 |     0.6  |       0.62 |       621 |
| 1            |        0.71 |     0.68 |       0.69 |       473 |
| 2            |        0.79 |     0.73 |       0.76 |       247 |
| 3            |        0.77 |     0.71 |       0.74 |       156 |
| 4            |        0.68 |     0.58 |       0.63 |       383 |
| 5            |        0.79 |     0.82 |       0.8  |       351 |
| 6            |        0.71 |     0.78 |       0.74 |       329 |
| 7            |        0.81 |     0.79 |       0.8  |       216 |
| 8            |        0.78 |     0.75 |       0.76 |       157 |
| 9            |        0.87 |     0.78 |       0.83 |       272 |
| 10           |        0.61 |     0.68 |       0.64 |       315 |
| 11           |        0.61 |     0.74 |       0.67 |       487 |
| 12           |        0.72 |     0.7  |       0.71 |       145 |
| 13           |        0.69 |     0.6  |       0.64 |       346 |
| 14           |        0.75 |     0.69 |       0.72 |       359 |
| 15           |        0.69 |     0.65 |       0.67 |       189 |
| 16           |        0.36 |     0.47 |       0.41 |        55 |
| 17           |        0.68 |     0.73 |       0.71 |       618 |
| 18           |        0.61 |     0.68 |       0.64 |       469 |
| 19           |        0    |     0    |       0    |        18 |
| 20           |        0.73 |     0.75 |       0.74 |       102 |
| 21           |        0    |     0    |       0    |         1 |
| macro avg    |        0.64 |     0.63 |       0.63 |      6309 |
| weighted avg |        0.7  |     0.69 |       0.69 |      6309 |
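
The table follows the layout of scikit-learn's `classification_report`. A sketch of reproducing it, assuming `y_true` holds the gold label indices (0–21) aligned with the test set and `probs` comes from the inference snippet above:

```python
from sklearn.metrics import classification_report

print(classification_report(y_true, np.argmax(probs, axis=1), digits=2))
```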

## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.

## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).

## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. When running the model with a `transformers` version earlier than 4.27, you need to install `sentencepiece` manually (`pip install sentencepiece`).

If you encounter a `RuntimeError` when loading the model with `from_pretrained()`, passing `ignore_mismatched_sizes=True` should resolve the issue.
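
A minimal version of the fix, mirroring the inference snippet above:

```python
model = AutoModelForSequenceClassification.from_pretrained(
    'poltextlab/xlm-roberta-large-german-cap',
    ignore_mismatched_sizes=True,
)
```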