---
language:
- sl
- bs
- sr
datasets:
- classla/xlm-r-bertic-data
---
# XLM-R-SloBERTić

This model was produced by pre-training [XLM-Roberta-large](https://huggingface.co/xlm-roberta-large) for 48k steps on South Slavic languages, using the [XLM-R-BERTić dataset](https://huggingface.co/datasets/classla/xlm-r-bertic-data).
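As a fill-mask model, it can be queried directly through the 🤗 Transformers `pipeline` API. A minimal sketch (the Croatian example sentence is made up for illustration):

```python
from transformers import pipeline

# Model id as used in this card's links; weights are downloaded on first run.
fill_mask = pipeline("fill-mask", model="classla/xlm-r-slobertic")

# XLM-R tokenizers use <mask> as the mask token.
# Example: "Zagreb je glavni <mask> Hrvatske." ("Zagreb is the capital <mask> of Croatia.")
predictions = fill_mask("Zagreb je glavni <mask> Hrvatske.", top_k=3)
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```

The same checkpoint can also be loaded with `AutoModelForMaskedLM` / `AutoTokenizer` for fine-tuning on downstream tasks such as those below.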

# Benchmarking
Three tasks were chosen for model evaluation: named entity recognition (NER), sentiment regression, and COPA (Choice of Plausible Alternatives).
 
In all cases, the model was fine-tuned separately for each downstream task.

## NER

Mean F1 scores were used to evaluate performance. Datasets used: [hr500k](https://huggingface.co/datasets/classla/hr500k), [ReLDI-sr](https://huggingface.co/datasets/classla/reldi_sr), [ReLDI-hr](https://huggingface.co/datasets/classla/reldi_hr), and [SETimes.SR](https://huggingface.co/datasets/classla/setimes_sr).

| system | dataset | F1 score |
|:-----------------------------------------------------------------------|:--------|---------:|
| [XLM-R-BERTić](https://huggingface.co/classla/xlm-r-bertic) | hr500k | 0.927 |
| [BERTić](https://huggingface.co/classla/bcms-bertic) | hr500k | 0.925 |
| [**XLM-R-SloBERTić**](https://huggingface.co/classla/xlm-r-slobertic) | hr500k | 0.923 |
| [XLM-Roberta-Large](https://huggingface.co/xlm-roberta-large) | hr500k | 0.919 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | hr500k | 0.918 |
| [XLM-Roberta-Base](https://huggingface.co/xlm-roberta-base) | hr500k | 0.903 |

| system | dataset | F1 score |
|:-----------------------------------------------------------------------|:---------|---------:|
| [**XLM-R-SloBERTić**](https://huggingface.co/classla/xlm-r-slobertic) | ReLDI-hr | 0.812 |
| [XLM-R-BERTić](https://huggingface.co/classla/xlm-r-bertic) | ReLDI-hr | 0.809 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | ReLDI-hr | 0.794 |
| [BERTić](https://huggingface.co/classla/bcms-bertic) | ReLDI-hr | 0.792 |
| [XLM-Roberta-Large](https://huggingface.co/xlm-roberta-large) | ReLDI-hr | 0.791 |
| [XLM-Roberta-Base](https://huggingface.co/xlm-roberta-base) | ReLDI-hr | 0.763 |

| system | dataset | F1 score |
|:-----------------------------------------------------------------------|:-----------|---------:|
| [**XLM-R-SloBERTić**](https://huggingface.co/classla/xlm-r-slobertic) | SETimes.SR | 0.949 |
| [XLM-R-BERTić](https://huggingface.co/classla/xlm-r-bertic) | SETimes.SR | 0.940 |
| [BERTić](https://huggingface.co/classla/bcms-bertic) | SETimes.SR | 0.936 |
| [XLM-Roberta-Large](https://huggingface.co/xlm-roberta-large) | SETimes.SR | 0.933 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | SETimes.SR | 0.922 |
| [XLM-Roberta-Base](https://huggingface.co/xlm-roberta-base) | SETimes.SR | 0.914 |

| system | dataset | F1 score |
|:-----------------------------------------------------------------------|:---------|---------:|
| [XLM-R-BERTić](https://huggingface.co/classla/xlm-r-bertic) | ReLDI-sr | 0.841 |
| [**XLM-R-SloBERTić**](https://huggingface.co/classla/xlm-r-slobertic) | ReLDI-sr | 0.824 |
| [BERTić](https://huggingface.co/classla/bcms-bertic) | ReLDI-sr | 0.798 |
| [XLM-Roberta-Large](https://huggingface.co/xlm-roberta-large) | ReLDI-sr | 0.774 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | ReLDI-sr | 0.751 |
| [XLM-Roberta-Base](https://huggingface.co/xlm-roberta-base) | ReLDI-sr | 0.734 |

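For readers unfamiliar with the metric, entity-level F1 is the harmonic mean of precision and recall over predicted entity spans. A minimal sketch on invented toy spans (not from any of the datasets above):

```python
def f1_score(tp, fp, fn):
    """Entity-level F1 from true-positive / false-positive / false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy gold and predicted entities as (label, start, end) token spans.
gold = {("PER", 0, 2), ("LOC", 5, 6), ("ORG", 8, 10)}
pred = {("PER", 0, 2), ("ORG", 5, 6), ("ORG", 8, 10)}

tp = len(gold & pred)   # exact label+span matches: 2
fp = len(pred - gold)   # predicted but not in gold: 1
fn = len(gold - pred)   # gold but not predicted:    1

print(f1_score(tp, fp, fn))  # precision = recall = 2/3, so F1 = 2/3
```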
## Sentiment regression

[ParlaSent dataset](https://huggingface.co/datasets/classla/ParlaSent) was used to evaluate sentiment regression for the Bosnian, Croatian, and Serbian languages.
 
| system | train dataset | test dataset | R² |
|:-----------------------------------------------------------------------|:--------------------|:-------------------------|------:|
| [xlm-r-parlasent](https://huggingface.co/classla/xlm-r-parlasent) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.615 |
| [BERTić](https://huggingface.co/classla/bcms-bertic) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.612 |
| [**XLM-R-SloBERTić**](https://huggingface.co/classla/xlm-r-slobertic) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.607 |
| [XLM-Roberta-Large](https://huggingface.co/xlm-roberta-large) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.605 |
| [XLM-R-BERTić](https://huggingface.co/classla/xlm-r-bertic) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.601 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.537 |
| [XLM-Roberta-Base](https://huggingface.co/xlm-roberta-base) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.500 |
| dummy (mean) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | -0.12 |
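The negative score for the mean baseline is consistent with the coefficient of determination (R², an assumption here): a constant predictor scores 0 on the split its mean was computed from, and can fall below 0 on a different split. A minimal sketch with invented values:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Invented sentiment targets and two predictors.
y_true = [1.0, 2.5, 4.0, 3.0]
good = [1.2, 2.4, 3.8, 3.1]
dummy = [2.0] * 4            # constant predictor, mean taken from a different split

print(r2_score(y_true, good))   # close to 1
print(r2_score(y_true, dummy))  # negative: worse than this split's own mean
```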

## COPA

| system | dataset | Accuracy score |
|:-----------------------------------------------------------------------|:--------|---------------:|
| [BERTić](https://huggingface.co/classla/bcms-bertic) | Copa-SR | 0.689 |
| [**XLM-R-SloBERTić**](https://huggingface.co/classla/xlm-r-slobertic) | Copa-SR | 0.665 |
| [XLM-R-BERTić](https://huggingface.co/classla/xlm-r-bertic) | Copa-SR | 0.637 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | Copa-SR | 0.607 |
| [XLM-Roberta-Base](https://huggingface.co/xlm-roberta-base) | Copa-SR | 0.573 |
| [XLM-Roberta-Large](https://huggingface.co/xlm-roberta-large) | Copa-SR | 0.570 |

| system | dataset | Accuracy score |
|:-----------------------------------------------------------------------|:--------|---------------:|
| [BERTić](https://huggingface.co/classla/bcms-bertic) | Copa-HR | 0.669 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | Copa-HR | 0.669 |
| [XLM-R-BERTić](https://huggingface.co/classla/xlm-r-bertic) | Copa-HR | 0.635 |
| [**XLM-R-SloBERTić**](https://huggingface.co/classla/xlm-r-slobertic) | Copa-HR | 0.628 |
| [XLM-Roberta-Base](https://huggingface.co/xlm-roberta-base) | Copa-HR | 0.585 |
| [XLM-Roberta-Large](https://huggingface.co/xlm-roberta-large) | Copa-HR | 0.571 |

# Citation
(to be added soon)