poltextlab committed
Commit 3ef1740
1 Parent(s): f0d397b

model card init

Files changed (1)
  1. README.md +35 -72
README.md CHANGED
@@ -1,11 +1,9 @@
  ---
  ---
- license: mit
  language:
- - multilingual
  tags:
- - zero-shot-classification
  - text-classification
  - pytorch
  metrics:
@@ -14,83 +12,37 @@ metrics:
  ---
  # xlm-roberta-large-spanish-legislative-cap-v3
  ## Model description
- An `xlm-roberta-large` model finetuned on multilingual training data containing texts of the `legislative` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
 
  ## How to use the model
- #### Loading and tokenizing input data
  ```python
- import pandas as pd
- import numpy as np
- from datasets import Dataset
- from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
-                           Trainer, TrainingArguments)
-
- CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
-                 6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
-                 13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20',
-                 19: '21', 20: '23', 21: '999'}
-
- tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
- num_labels = len(CAP_NUM_DICT)
-
- def tokenize_dataset(data : pd.DataFrame):
-     tokenized = tokenizer(data["text"],
-                           max_length=MAXLEN,
-                           truncation=True,
-                           padding="max_length")
-     return tokenized
-
- hg_data = Dataset.from_pandas(data)
- dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
- ```
 
- #### Inference using the Trainer class
- ```python
- model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-spanish-legislative-cap-v3',
-                                                            num_labels=num_labels,
-                                                            problem_type="multi_label_classification",
-                                                            ignore_mismatched_sizes=True
-                                                            )
-
- training_args = TrainingArguments(
-     output_dir='.',
-     per_device_train_batch_size=8,
-     per_device_eval_batch_size=8
- )
-
- trainer = Trainer(
-     model=model,
-     args=training_args
- )
-
- probs = trainer.predict(test_dataset=dataset).predictions
- predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
-     columns={0: 'predicted'}).reset_index(drop=True)
 
- ```
 
- ### Fine-tuning procedure
- `xlm-roberta-large-spanish-legislative-cap-v3` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
- ```python
- training_args = TrainingArguments(
-     output_dir=f"../model/{model_dir}/tmp/",
-     logging_dir=f"../logs/{model_dir}/",
-     logging_strategy='epoch',
-     num_train_epochs=10,
-     per_device_train_batch_size=8,
-     per_device_eval_batch_size=8,
-     learning_rate=5e-06,
-     seed=42,
-     save_strategy='epoch',
-     evaluation_strategy='epoch',
-     save_total_limit=1,
-     load_best_model_at_end=True
- )
  ```
- We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
 
  ## Model performance
- The model was evaluated on a test set of 1638 examples (10% of the available data).<br>
  Model accuracy is **0.85**.
  | label | precision | recall | f1-score | support |
  |:-------------|------------:|---------:|-----------:|----------:|
@@ -118,13 +70,24 @@ Model accuracy is **0.85**.
  | macro avg | 0.81 | 0.79 | 0.8 | 1638 |
  | weighted avg | 0.85 | 0.85 | 0.85 | 1638 |
 
  ## Inference platform
  This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
 
  ## Cooperation
  Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
 
  ## Debugging and issues
- This architecture uses the `sentencepiece` tokenizer. In order to run the model before `transformers==4.27` you need to install it manually.
 
  If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
 
  ---
  ---
  language:
+ - es
  tags:
  - text-classification
  - pytorch
  metrics:
 
  ---
  # xlm-roberta-large-spanish-legislative-cap-v3
  ## Model description
+ An `xlm-roberta-large` model fine-tuned on Spanish training data containing legislative documents (bills, laws, motions, legislative decrees, hearings, resolutions, other) labeled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
 
  ## How to use the model
+ This snippet prints the three most probable labels and their corresponding softmax scores:
  ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
 
+ model = AutoModelForSequenceClassification.from_pretrained("poltextlab/xlm-roberta-large-spanish-legislative-cap-v3")
+ tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
 
+ sentence = "This is an example."
 
+ inputs = tokenizer(sentence,
+                    return_tensors="pt",
+                    max_length=512,
+                    padding="do_not_pad",
+                    truncation=True
+                    )
+
+ logits = model(**inputs).logits
+
+ probs = torch.softmax(logits, dim=1).tolist()[0]
+ probs = {model.config.id2label[index]: round(probability, 2) for index, probability in enumerate(probs)}
+ top3_probs = dict(sorted(probs.items(), key=lambda item: item[1], reverse=True)[:3])
+
+ print(top3_probs)
  ```
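As a rough sketch (not part of the card's snippet above), the same top-3 output can also be obtained through the `pipeline` convenience API, assuming a recent `transformers` release where the text-classification pipeline accepts `top_k`; as in the snippet, the tokenizer is loaded from the base `xlm-roberta-large` checkpoint:

```python
# Hypothetical convenience variant of the snippet above; not taken from the card.
from transformers import AutoTokenizer, pipeline

classifier = pipeline(
    "text-classification",
    model="poltextlab/xlm-roberta-large-spanish-legislative-cap-v3",
    tokenizer=AutoTokenizer.from_pretrained("xlm-roberta-large"),
    top_k=3,  # return the three highest-scoring labels with their softmax scores
)

print(classifier("This is an example."))
```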
 
  ## Model performance
+ The model was evaluated on a test set of 1638 examples.<br>
  Model accuracy is **0.85**.
  | label | precision | recall | f1-score | support |
  |:-------------|------------:|---------:|-----------:|----------:|
 
  | macro avg | 0.81 | 0.79 | 0.8 | 1638 |
  | weighted avg | 0.85 | 0.85 | 0.85 | 1638 |
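The per-label precision, recall, F1-score and support values above follow the standard classification-report layout; a minimal sketch of how such a table can be produced, where `y_true` and `y_pred` are placeholders for the gold and predicted CAP major topic codes of the test set:

```python
# Sketch only: y_true / y_pred stand in for the gold and predicted
# CAP major topic codes of the 1638 test examples.
from sklearn.metrics import classification_report

y_true = ["1", "2", "999"]   # placeholder gold labels
y_pred = ["1", "3", "999"]   # placeholder predictions

print(classification_report(y_true, y_pred, digits=2))
```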
 
+ ### Fine-tuning procedure
+ This model was fine-tuned with the following key hyperparameters:
+
+ - **Number of Training Epochs**: 10
+ - **Batch Size**: 8
+ - **Learning Rate**: 5e-06
+ - **Early Stopping**: enabled with a patience of 2 epochs
+
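A minimal sketch of an equivalent `Trainer` setup, based on the hyperparameters above and the `TrainingArguments` values visible in the removed lines of this diff; the output path and the `train_data`/`eval_data` objects are placeholders:

```python
# Sketch reconstructing the fine-tuning configuration listed above; not an
# official training script. Paths and dataset objects are placeholders.
from transformers import (AutoModelForSequenceClassification, EarlyStoppingCallback,
                          Trainer, TrainingArguments)

# 21 CAP major topic codes plus the residual "999" category -> 22 labels
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-large", num_labels=22)

training_args = TrainingArguments(
    output_dir="./tmp",                 # placeholder output path
    logging_strategy="epoch",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-06,
    seed=42,
    save_strategy="epoch",
    evaluation_strategy="epoch",
    save_total_limit=1,
    load_best_model_at_end=True,
)

train_data = eval_data = None           # placeholders for tokenized datasets

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_data,
    eval_dataset=eval_data,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
# trainer.train()
```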
  ## Inference platform
  This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
 
  ## Cooperation
  Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
 
+ ## Reference
+ Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434
+
  ## Debugging and issues
+ This architecture uses the `sentencepiece` tokenizer. To use the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
 
  If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
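A short illustration of the workaround described above, assuming nothing beyond what the card states:

```python
# Workaround sketch for the RuntimeError mentioned above. With
# transformers < 4.27, install the tokenizer dependency first:
#   pip install sentencepiece
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-spanish-legislative-cap-v3",
    ignore_mismatched_sizes=True,
)
```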