---
language:
- fr
tags:
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-french-cap-v3
## Model description
An `xlm-roberta-large` model fine-tuned on French training data labeled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).

## How to use the model
This snippet prints the three most probable labels and their corresponding softmax scores:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and the matching base tokenizer
model = AutoModelForSequenceClassification.from_pretrained("poltextlab/xlm-roberta-large-french-cap-v3")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

sentence = "This is an example."

# Tokenize a single sentence; padding is unnecessary for a batch of one
inputs = tokenizer(sentence,
                   return_tensors="pt",
                   max_length=512,
                   padding="do_not_pad",
                   truncation=True
                   )

logits = model(**inputs).logits

# Convert logits to probabilities, map them to label names, and keep the top three
probs = torch.softmax(logits, dim=1).tolist()[0]
probs = {model.config.id2label[index]: round(probability, 2) for index, probability in enumerate(probs)}
top3_probs = dict(sorted(probs.items(), key=lambda item: item[1], reverse=True)[:3])

print(top3_probs)
```

## Model performance
The model was evaluated on a test set of 2280 examples.<br>
Model accuracy is **0.71**.
| label        |   precision |   recall |   f1-score |   support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0            |        0.71 |     0.72 |       0.71 |       200 |
| 1            |        0.59 |     0.44 |       0.5  |        62 |
| 2            |        0.82 |     0.74 |       0.78 |        80 |
| 3            |        0.66 |     0.75 |       0.7  |        64 |
| 4            |        0.72 |     0.57 |       0.63 |       186 |
| 5            |        0.75 |     0.76 |       0.76 |       125 |
| 6            |        0.7  |     0.6  |       0.65 |        85 |
| 7            |        0.88 |     0.82 |       0.85 |        45 |
| 8            |        0.7  |     0.74 |       0.72 |        57 |
| 9            |        0.74 |     0.86 |       0.79 |        58 |
| 10           |        0.82 |     0.77 |       0.8  |       154 |
| 11           |        0.55 |     0.65 |       0.59 |       105 |
| 12           |        0.76 |     0.64 |       0.7  |        87 |
| 13           |        0.58 |     0.59 |       0.59 |       106 |
| 14           |        0.8  |     0.8  |       0.8  |        87 |
| 15           |        0.7  |     0.72 |       0.71 |        46 |
| 16           |        0.57 |     0.71 |       0.63 |        59 |
| 17           |        0.64 |     0.79 |       0.71 |       204 |
| 18           |        0.78 |     0.78 |       0.78 |       359 |
| 19           |        0    |     0    |       0    |         7 |
| 20           |        0.76 |     0.7  |       0.73 |       104 |
| 21           |        0    |     0    |       0    |         0 |
| macro avg    |        0.65 |     0.64 |       0.64 |      2280 |
| weighted avg |        0.72 |     0.71 |       0.71 |      2280 |
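
Per-label tables of this form can be produced with scikit-learn's `classification_report`. The snippet below is only a sketch: `y_true` and `y_pred` are hypothetical placeholders, not the actual test-set outputs.

```python
from sklearn.metrics import classification_report

# Hypothetical placeholders: gold CAP major topic ids and model predictions
# (in practice these come from running the model over the held-out test set)
y_true = [0, 1, 2, 18, 18, 17]
y_pred = [0, 1, 2, 18, 17, 17]

# digits=2 matches the rounding used in the table above
print(classification_report(y_true, y_pred, digits=2))
```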

### Fine-tuning procedure
This model was fine-tuned with the following key hyperparameters:

- **Number of Training Epochs**: 10
- **Batch Size**: 8
- **Learning Rate**: 5e-06
- **Early Stopping**: enabled with a patience of 2 epochs
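
The full training script is not part of this card; the snippet below is only a sketch of how these settings map onto the Hugging Face `Trainer` API. The `output_dir` and the evaluation/save strategies are assumptions, not documented choices.

```python
from transformers import TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="xlm-roberta-large-french-cap-v3",  # assumed output directory
    num_train_epochs=10,               # Number of Training Epochs
    per_device_train_batch_size=8,     # Batch Size
    learning_rate=5e-6,                # Learning Rate
    eval_strategy="epoch",             # named evaluation_strategy in older transformers
    save_strategy="epoch",
    load_best_model_at_end=True,       # required for early stopping
)

# Early stopping with a patience of 2 epochs
early_stopping = EarlyStoppingCallback(early_stopping_patience=2)
```

The callback would then be passed to `Trainer(callbacks=[early_stopping])` together with the tokenized training and validation sets.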

## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.

## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).

## Reference
Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434

## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. With `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.

If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
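
A sketch of both workarounds (the model id is the one above; everything else is standard `transformers` usage):

```python
# For transformers < 4.27, install the tokenizer dependency manually first:
#   pip install sentencepiece

from transformers import AutoModelForSequenceClassification

# If from_pretrained() raises a RuntimeError about mismatched tensor shapes,
# pass ignore_mismatched_sizes=True:
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-french-cap-v3",
    ignore_mismatched_sizes=True,
)
```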