---
language:
- en
tags:
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-english-cap-v3
## Model description
An `xlm-roberta-large` model fine-tuned on English training data labeled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).

## How to use the model
This snippet prints the three most probable labels and their corresponding softmax scores:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and the base model's tokenizer
model = AutoModelForSequenceClassification.from_pretrained("poltextlab/xlm-roberta-large-english-cap-v3")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

sentence = "This is an example."

# Tokenize the input; sequences longer than 512 tokens are truncated
inputs = tokenizer(sentence,
                   return_tensors="pt",
                   max_length=512,
                   padding="do_not_pad",
                   truncation=True
                   )

logits = model(**inputs).logits

# Convert logits to probabilities and keep the three highest-scoring labels
probs = torch.softmax(logits, dim=1).tolist()[0]
probs = {model.config.id2label[index]: round(probability, 2) for index, probability in enumerate(probs)}
top3_probs = dict(sorted(probs.items(), key=lambda item: item[1], reverse=True)[:3])

print(top3_probs)
```
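
Alternatively, a similar top-3 output can be obtained with the high-level `pipeline` API. This is a minimal sketch that assumes the checkpoint works with the standard `text-classification` pipeline; as above, the tokenizer is loaded from the base model:
```python
from transformers import pipeline

# Minimal sketch: text-classification pipeline returning the three highest-scoring labels.
# The tokenizer is loaded from the base model, as in the snippet above.
classifier = pipeline(
    "text-classification",
    model="poltextlab/xlm-roberta-large-english-cap-v3",
    tokenizer="xlm-roberta-large",
    top_k=3,
)

print(classifier("This is an example."))
```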

## Model performance
The model was evaluated on a test set of 91,823 examples.<br>
Model accuracy is **0.84**.
| label        |   precision |   recall |   f1-score |   support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0            |        0.75 |     0.8  |       0.77 |      4671 |
| 1            |        0.79 |     0.74 |       0.76 |      2288 |
| 2            |        0.88 |     0.89 |       0.89 |      5732 |
| 3            |        0.87 |     0.87 |       0.87 |      2611 |
| 4            |        0.81 |     0.78 |       0.79 |      2819 |
| 5            |        0.86 |     0.89 |       0.88 |      3999 |
| 6            |        0.83 |     0.84 |       0.84 |      3044 |
| 7            |        0.9  |     0.84 |       0.87 |      2547 |
| 8            |        0.82 |     0.83 |       0.83 |       912 |
| 9            |        0.87 |     0.89 |       0.88 |      4322 |
| 10           |        0.84 |     0.85 |       0.85 |      6558 |
| 11           |        0.83 |     0.8  |       0.81 |      2965 |
| 12           |        0.74 |     0.82 |       0.78 |      1955 |
| 13           |        0.81 |     0.82 |       0.82 |      5422 |
| 14           |        0.83 |     0.82 |       0.83 |      6636 |
| 15           |        0.82 |     0.78 |       0.8  |      1580 |
| 16           |        0.87 |     0.85 |       0.86 |      2425 |
| 17           |        0.76 |     0.82 |       0.79 |      5700 |
| 18           |        0.86 |     0.82 |       0.84 |     10700 |
| 19           |        0.85 |     0.88 |       0.87 |      4761 |
| 20           |        0.76 |     0.76 |       0.76 |      2622 |
| 21           |        0.93 |     0.91 |       0.92 |      7554 |
| macro avg    |        0.83 |     0.83 |       0.83 |     91823 |
| weighted avg |        0.84 |     0.84 |       0.84 |     91823 |
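
The per-label table above follows the layout of a standard scikit-learn classification report. The sketch below shows how such a report can be generated; `y_true` and `y_pred` are illustrative placeholders for the gold topic codes and the model's predicted label indices:
```python
from sklearn.metrics import classification_report

# Illustrative placeholders: replace with the real test-set labels and the
# model's predicted label indices (integers 0-21).
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 2, 1, 1]

print(classification_report(y_true, y_pred, digits=2))
```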

### Fine-tuning procedure
This model was fine-tuned with the following key hyperparameters:

- **Number of Training Epochs**: 10
- **Batch Size**: 8
- **Learning Rate**: 5e-06
- **Early Stopping**: enabled with a patience of 2 epochs
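
A minimal sketch of how these hyperparameters could be expressed with the Hugging Face `Trainer` API is given below; dataset loading and tokenization are omitted, `train_dataset` and `eval_dataset` are assumed to exist, and this is not necessarily the exact script used to produce the released checkpoint:
```python
from transformers import (AutoModelForSequenceClassification, EarlyStoppingCallback,
                          Trainer, TrainingArguments)

# 22 CAP major topic codes (labels 0-21 in the table above)
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-large", num_labels=22)

training_args = TrainingArguments(
    output_dir="xlm-roberta-large-english-cap-v3",
    num_train_epochs=10,            # number of training epochs
    per_device_train_batch_size=8,  # batch size
    learning_rate=5e-6,             # learning rate
    eval_strategy="epoch",          # named evaluation_strategy in transformers < 4.41
    save_strategy="epoch",
    load_best_model_at_end=True,    # required for early stopping
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,    # assumed: tokenized training split
    eval_dataset=eval_dataset,      # assumed: tokenized validation split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],  # patience of 2 epochs
)
trainer.train()
```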

## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.

## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (in any domain or language) sent to poltextlab{at}poltextlab{dot}com or submitted via the [CAP Babel Machine](https://babel.poltextlab.com).

## Reference
Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434

## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. With `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.

If you encounter a `RuntimeError` when loading the model with `from_pretrained()`, passing `ignore_mismatched_sizes=True` to that call should resolve the issue.
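
For example, a minimal sketch of the workaround (after installing the tokenizer dependency with `pip install sentencepiece`):
```python
from transformers import AutoModelForSequenceClassification

# If from_pretrained() raises a size-mismatch RuntimeError, this flag lets the
# matching weights load and reinitialises the mismatched ones.
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-english-cap-v3",
    ignore_mismatched_sizes=True,
)
```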