---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-large_ALL_BCE_translated_data_multihead_19_shuffled_special_tokens_val
  results: []
---

# xlm-roberta-large_ALL_BCE_translated_data_multihead_19_shuffled_special_tokens_val

This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8461

Macro F1 at a single global decision threshold, swept from 0.10 to 0.95 in steps of 0.05:

| Threshold | Macro F1 |
|:---------:|:--------:|
| 0.10      | 0.0910   |
| 0.15      | 0.1188   |
| 0.20      | 0.1445   |
| 0.25      | 0.1675   |
| 0.30      | 0.1890   |
| 0.35      | 0.2092   |
| 0.40      | 0.2277   |
| 0.45      | 0.2467   |
| 0.50      | 0.2641   |
| 0.55      | 0.2816   |
| 0.60      | 0.2976   |
| 0.65      | 0.3120   |
| 0.70      | 0.3274   |
| 0.75      | 0.3429   |
| 0.80      | 0.3559   |
| 0.85      | 0.3640   |
| 0.90      | 0.3511   |
| 0.95      | 0.2837   |

Best decision threshold and the corresponding F1 for each of the 19 classification heads:

| Head | Best threshold | F1     |
|:----:|:--------------:|:------:|
| 0    | 0.80           | 0.1473 |
| 1    | 0.80           | 0.2629 |
| 2    | 0.90           | 0.3389 |
| 3    | 0.90           | 0.2821 |
| 4    | 0.80           | 0.4463 |
| 5    | 0.85           | 0.4627 |
| 6    | 0.75           | 0.4396 |
| 7    | 0.85           | 0.3159 |
| 8    | 0.80           | 0.3574 |
| 9    | 0.75           | 0.5352 |
| 10   | 0.90           | 0.5231 |
| 11   | 0.85           | 0.5417 |
| 12   | 0.90           | 0.2511 |
| 13   | 0.85           | 0.1600 |
| 14   | 0.90           | 0.3940 |
| 15   | 0.90           | 0.3114 |
| 16   | 0.85           | 0.4335 |
| 17   | 0.80           | 0.6149 |
| 18   | 0.90           | 0.2350 |

- Max macro F1 over the global threshold sweep: 0.3640 (at threshold 0.85)
- Mean of the per-head F1 scores at their best thresholds: 0.3712
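
Applying the reported per-head thresholds at inference time reduces to an elementwise comparison against sigmoid probabilities. A minimal sketch follows; the `predict` helper and the random logits are illustrative placeholders, not part of the released checkpoint:

```python
import torch

# Per-head decision thresholds from the evaluation above (heads 0-18).
THRESHOLDS = torch.tensor([
    0.80, 0.80, 0.90, 0.90, 0.80, 0.85, 0.75, 0.85, 0.80, 0.75,
    0.90, 0.85, 0.90, 0.85, 0.90, 0.90, 0.85, 0.80, 0.90,
])

def predict(logits: torch.Tensor) -> torch.Tensor:
    """Turn raw per-head logits of shape (batch, 19) into 0/1 decisions."""
    probs = torch.sigmoid(logits)       # BCE-trained heads -> sigmoid at inference
    return (probs >= THRESHOLDS).int()  # broadcast against per-head thresholds

# Example with random logits standing in for real model outputs.
decisions = predict(torch.randn(2, 19))
print(decisions.shape)  # torch.Size([2, 19])
```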

## Model description

More information needed. From the model name, this appears to be xlm-roberta-large fine-tuned with 19 classification heads ("multihead_19") under a binary cross-entropy ("BCE") objective on translated, shuffled data with added special tokens; a hypothetical sketch of such an architecture follows.
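
Purely as an illustration of what "multihead_19" plausibly denotes, the sketch below pairs xlm-roberta-large with 19 independent binary heads trained under `BCEWithLogitsLoss`. This is an assumption inferred from the model name, not the author's confirmed architecture:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiHeadClassifier(nn.Module):
    """Hypothetical 19-head binary classifier over xlm-roberta-large."""

    def __init__(self, num_heads: int = 19):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("xlm-roberta-large")
        hidden = self.encoder.config.hidden_size  # 1024 for xlm-roberta-large
        # One binary logit per head.
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(num_heads))
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, input_ids, attention_mask, labels=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # <s> token as sequence embedding
        logits = torch.cat([head(pooled) for head in self.heads], dim=-1)  # (B, 19)
        loss = self.loss_fn(logits, labels.float()) if labels is not None else None
        return loss, logits
```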

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
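
For reproducibility, the configuration above maps onto Hugging Face `TrainingArguments` roughly as follows. This is a sketch assuming the standard `Trainer` was used, which the auto-generated card implies; dataset and model wiring are omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-large_ALL_BCE_translated_data_multihead_19_shuffled_special_tokens_val",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=2024,
    lr_scheduler_type="linear",   # linear decay after warmup
    warmup_ratio=0.1,             # 10% of training steps as warmup
    num_train_epochs=3,
    fp16=True,                    # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)
```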

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Macro 0.1 | F1 Macro 0.15 | F1 Macro 0.2 | F1 Macro 0.25 | F1 Macro 0.3 | F1 Macro 0.35 | F1 Macro 0.4 | F1 Macro 0.45 | F1 Macro 0.5 | F1 Macro 0.55 | F1 Macro 0.6 | F1 Macro 0.65 | F1 Macro 0.7 | F1 Macro 0.75 | F1 Macro 0.8 | F1 Macro 0.85 | F1 Macro 0.9 | F1 Macro 0.95 | Threshold 0 | Threshold 1 | Threshold 2 | Threshold 3 | Threshold 4 | Threshold 5 | Threshold 6 | Threshold 7 | Threshold 8 | Threshold 9 | Threshold 10 | Threshold 11 | Threshold 12 | Threshold 13 | Threshold 14 | Threshold 15 | Threshold 16 | Threshold 17 | Threshold 18 | 0      | 1      | 2      | 3      | 4      | 5      | 6      | 7      | 8      | 9      | 10     | 11     | 12     | 13     | 14     | 15     | 16     | 17     | 18     | Max F1 | Mean F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:------------:|:-------------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:-------:|
| 1.2584        | 1.0   | 5595  | 0.9674          | 0.0635       | 0.0727        | 0.0841       | 0.0965        | 0.1097       | 0.1230        | 0.1373       | 0.1527        | 0.1683       | 0.1848        | 0.2014       | 0.2169        | 0.2332       | 0.2464        | 0.2560       | 0.2481        | 0.2156       | 0.1188        | 0.6         | 0.75        | 0.85        | 0.85        | 0.75        | 0.75        | 0.85        | 0.85        | 0.8         | 0.75        | 0.9          | 0.8          | 0.9          | 0.9          | 0.9          | 0.85         | 0.85         | 0.95         | 0.85         | 0.0616 | 0.1640 | 0.2570 | 0.1316 | 0.3160 | 0.3605 | 0.3737 | 0.1615 | 0.2401 | 0.4683 | 0.3648 | 0.4699 | 0.1921 | 0.1682 | 0.2660 | 0.1916 | 0.3389 | 0.5671 | 0.1519 | 0.2560 | 0.2760  |
| 0.8825        | 2.0   | 11190 | 0.8752          | 0.0809       | 0.1031        | 0.1250       | 0.1462        | 0.1660       | 0.1849        | 0.2037       | 0.2222        | 0.2378       | 0.2545        | 0.2731       | 0.2904        | 0.3086       | 0.3270        | 0.3385       | 0.3475        | 0.3283       | 0.2515        | 0.85        | 0.8         | 0.85        | 0.85        | 0.8         | 0.85        | 0.85        | 0.85        | 0.8         | 0.75        | 0.9          | 0.85         | 0.85         | 0.75         | 0.9          | 0.9          | 0.85         | 0.85         | 0.9          | 0.1324 | 0.2436 | 0.3250 | 0.2624 | 0.4264 | 0.4417 | 0.4149 | 0.2962 | 0.3592 | 0.5202 | 0.5151 | 0.5290 | 0.2303 | 0.1584 | 0.3684 | 0.2897 | 0.4175 | 0.6251 | 0.2145 | 0.3475 | 0.3563  |
| 0.7223        | 3.0   | 16785 | 0.8461          | 0.0910       | 0.1188        | 0.1445       | 0.1675        | 0.1890       | 0.2092        | 0.2277       | 0.2467        | 0.2641       | 0.2816        | 0.2976       | 0.3120        | 0.3274       | 0.3429        | 0.3559       | 0.3640        | 0.3511       | 0.2837        | 0.8         | 0.8         | 0.9         | 0.9         | 0.8         | 0.85        | 0.75        | 0.85        | 0.8         | 0.75        | 0.9          | 0.85         | 0.9          | 0.85         | 0.9          | 0.9          | 0.85         | 0.8          | 0.9          | 0.1473 | 0.2629 | 0.3389 | 0.2821 | 0.4463 | 0.4627 | 0.4396 | 0.3159 | 0.3574 | 0.5352 | 0.5231 | 0.5417 | 0.2511 | 0.1600 | 0.3940 | 0.3114 | 0.4335 | 0.6149 | 0.2350 | 0.3640 | 0.3712  |
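
The per-threshold macro F1 columns and the per-head "Threshold n" values above can be reproduced with a sweep like the following sketch, where `probs` and `labels` are hypothetical validation arrays of shape `(n_samples, 19)`:

```python
import numpy as np
from sklearn.metrics import f1_score

GRID = np.arange(0.10, 1.00, 0.05)  # 0.10, 0.15, ..., 0.95

def macro_f1_sweep(probs: np.ndarray, labels: np.ndarray) -> dict:
    """Macro F1 at each global threshold in GRID ("F1 Macro 0.x" columns)."""
    return {
        round(float(t), 2): f1_score(labels, (probs >= t).astype(int),
                                     average="macro", zero_division=0)
        for t in GRID
    }

def best_per_head_thresholds(probs: np.ndarray, labels: np.ndarray):
    """Per head, the GRID threshold maximising binary F1 ("Threshold n" rows)."""
    thresholds, f1s = [], []
    for head in range(labels.shape[1]):
        scores = [f1_score(labels[:, head], (probs[:, head] >= t).astype(int),
                           zero_division=0) for t in GRID]
        best = int(np.argmax(scores))
        thresholds.append(float(GRID[best]))
        f1s.append(scores[best])
    return thresholds, f1s
```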


### Framework versions

- Transformers 4.36.1
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.15.0