---
language:
- da
tags:
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-danish-parlspeech-cap-v3
## Model description
An `xlm-roberta-large` model fine-tuned on Danish training data containing parliamentary speeches (oral questions, interpellations, bill debates, other plenary speeches, urgent questions) labeled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).

## How to use the model
This snippet prints the three most probable labels and their corresponding softmax scores:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("poltextlab/xlm-roberta-large-danish-parlspeech-cap-v3")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

sentence = "This is an example."

# Tokenize a single sentence; padding is unnecessary for a batch of one.
inputs = tokenizer(sentence,
                   return_tensors="pt",
                   max_length=512,
                   padding="do_not_pad",
                   truncation=True
                   )

# Run inference without tracking gradients.
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and map class indices to label names.
probs = torch.softmax(logits, dim=1).tolist()[0]
probs = {model.config.id2label[index]: round(probability, 2) for index, probability in enumerate(probs)}

# Keep the three highest-scoring labels.
top3_probs = dict(sorted(probs.items(), key=lambda item: item[1], reverse=True)[:3])

print(top3_probs)
```
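
The same output can also be obtained through the high-level `pipeline` API. This is a minimal sketch, not part of the original card, and assumes a recent `transformers` version in which `top_k` is supported:
```python
from transformers import pipeline

# Sketch: the same top-3 prediction via the text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="poltextlab/xlm-roberta-large-danish-parlspeech-cap-v3",
    tokenizer="xlm-roberta-large",
)

# `top_k=3` returns the three most probable labels; `truncation=True`
# truncates inputs beyond the model's 512-token limit.
print(classifier("This is an example.", top_k=3, truncation=True))
```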

## Model performance
The model was evaluated on a test set of 44,159 examples.<br>
Model accuracy is **0.94**.
| label        |   precision |   recall |   f1-score |   support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0            |        0.91 |     0.92 |       0.92 |      2310 |
| 1            |        0.9  |     0.9  |       0.9  |      1285 |
| 2            |        0.98 |     0.96 |       0.97 |      3400 |
| 3            |        0.95 |     0.95 |       0.95 |      1972 |
| 4            |        0.92 |     0.93 |       0.93 |      2679 |
| 5            |        0.96 |     0.96 |       0.96 |      2778 |
| 6            |        0.94 |     0.94 |       0.94 |      2458 |
| 7            |        0.96 |     0.94 |       0.95 |      1173 |
| 8            |        0.95 |     0.96 |       0.96 |      1948 |
| 9            |        0.95 |     0.97 |       0.96 |      3276 |
| 10           |        0.94 |     0.95 |       0.94 |      3224 |
| 11           |        0.92 |     0.93 |       0.93 |      2270 |
| 12           |        0.94 |     0.93 |       0.93 |      1510 |
| 13           |        0.89 |     0.89 |       0.89 |      1759 |
| 14           |        0.96 |     0.95 |       0.95 |      1941 |
| 15           |        0.95 |     0.93 |       0.94 |      1343 |
| 16           |        0.89 |     0.9  |       0.9  |       402 |
| 17           |        0.95 |     0.94 |       0.95 |      3337 |
| 18           |        0.92 |     0.92 |       0.92 |      3484 |
| 19           |        0.95 |     0.95 |       0.95 |       834 |
| 20           |        0.93 |     0.91 |       0.92 |       776 |
| macro avg    |        0.94 |     0.93 |       0.94 |     44159 |
| weighted avg |        0.94 |     0.94 |       0.94 |     44159 |
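
The table follows the layout of scikit-learn's `classification_report`. As a hedged sketch, an evaluation of this kind can be reproduced as follows; `texts` and `gold_labels` are placeholders, since the test set itself is not distributed with the model:
```python
import torch
from sklearn.metrics import classification_report
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-danish-parlspeech-cap-v3")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

# Placeholders: substitute the real CAP-coded test examples here.
texts, gold_labels = ["..."], [0]

predictions = []
with torch.no_grad():
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt",
                           max_length=512, truncation=True)
        logits = model(**inputs).logits
        predictions.append(int(logits.argmax(dim=1)))

# Prints per-label precision, recall, f1-score and support, plus the
# macro and weighted averages shown in the table above.
print(classification_report(gold_labels, predictions, digits=2))
```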

## Fine-tuning procedure
This model was fine-tuned with the following key hyperparameters (a sketch of an equivalent setup follows the list):

- **Number of Training Epochs**: 10
- **Batch Size**: 8
- **Learning Rate**: 5e-06
- **Early Stopping**: enabled with a patience of 2 epochs
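
As a reference point, here is a minimal sketch of how these hyperparameters map onto the Hugging Face `Trainer` API. The dataset variables are placeholders (the training corpora are not distributed with the model), and the actual training script may have differed:
```python
from transformers import (AutoModelForSequenceClassification,
                          EarlyStoppingCallback, Trainer, TrainingArguments)

# 21 classes, matching labels 0-20 in the performance table above.
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large", num_labels=21)

training_args = TrainingArguments(
    output_dir="./xlm-roberta-large-danish-parlspeech-cap-v3",
    num_train_epochs=10,            # number of training epochs
    per_device_train_batch_size=8,  # batch size
    learning_rate=5e-6,             # learning rate
    evaluation_strategy="epoch",    # evaluate each epoch for early stopping
    save_strategy="epoch",
    load_best_model_at_end=True,    # required by EarlyStoppingCallback
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder: CAP-coded training split
    eval_dataset=eval_dataset,    # placeholder: CAP-coded validation split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```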

## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.

## Cooperation
Model performance can be significantly improved by extending our training sets. We welcome every submission of CAP-coded corpora (from any domain and language) at poltextlab{at}poltextlab{dot}com or through the [CAP Babel Machine](https://babel.poltextlab.com).

## Reference
Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434

## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model on `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.

If you encounter a `RuntimeError` when loading the model with `from_pretrained()`, passing `ignore_mismatched_sizes=True` should resolve the issue, as shown below.
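
For example, a minimal sketch of this workaround:
```python
from transformers import AutoModelForSequenceClassification

# Ignore size mismatches between the checkpoint and the model config
# if `from_pretrained()` raises a RuntimeError.
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-danish-parlspeech-cap-v3",
    ignore_mismatched_sizes=True,
)
```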