MoritzLaurer (HF staff) committed on
Commit 290cdf1
1 Parent(s): ec9b89c

Update README.md

Files changed (1)
  1. README.md +24 -2
README.md CHANGED
@@ -1,13 +1,31 @@
+---
+
 ---
 language:
+- multilingual
 - en
 tags:
-- text-classification
 - zero-shot-classification
+- text-classification
+- nli
+- pytorch
 metrics:
 - accuracy
+datasets:
+- mnli
+- xnli
+- anli
+license: mit
 pipeline_tag: zero-shot-classification
-
+widget:
+- text: "De pugna erat fantastic. Nam Crixo decem quam dilexit et praeciderunt caput aemulus."
+  candidate_labels: "violent, peaceful"
+- text: "La película empezaba bien pero terminó siendo un desastre."
+  candidate_labels: "positivo, negativo, neutral"
+- text: "La película empezó siendo un desastre pero en general fue bien."
+  candidate_labels: "positivo, negativo, neutral"
+- text: "¿A quién vas a votar en 2020?"
+  candidate_labels: "Europa, elecciones, política, ciencia, deportes"
 ---
 # Multilingual mDeBERTa-v3-base-mnli-xnli
 ## Model description
@@ -51,7 +69,11 @@ training_args = TrainingArguments(
 ### Eval results
 The model was evaluated using the matched test set and achieves 0.90 accuracy.
 
+average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh
+---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------
+0.808 | 0.802 | 0.829 | 0.825 | 0.826 | 0.883 | 0.845 | 0.834 | 0.771 | 0.813 | 0.748 | 0.793 | 0.807 | 0.740 | 0.795 | 0.8116
 
+{'ar': 0.8017964071856287, 'bg': 0.8287425149700599, 'de': 0.8253493013972056, 'el': 0.8267465069860279, 'en': 0.8830339321357286, 'es': 0.8449101796407186, 'fr': 0.8343313373253493, 'hi': 0.7712574850299401, 'ru': 0.8127744510978044, 'sw': 0.7483033932135729, 'th': 0.792814371257485, 'tr': 0.8065868263473054, 'ur': 0.7403193612774451, 'vi': 0.7954091816367266, 'zh': 0.8115768463073852}
 
 ## Limitations and bias
 Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.
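The commit adds both a rounded per-language table and a raw accuracy dictionary to the eval section. As a sanity check, the table's "average" column can be recomputed directly from that dictionary (a small sketch with no external dependencies; the values are copied verbatim from the diff):

```python
# Per-language XNLI test accuracies, as added in this commit's eval section.
xnli_acc = {
    'ar': 0.8017964071856287, 'bg': 0.8287425149700599, 'de': 0.8253493013972056,
    'el': 0.8267465069860279, 'en': 0.8830339321357286, 'es': 0.8449101796407186,
    'fr': 0.8343313373253493, 'hi': 0.7712574850299401, 'ru': 0.8127744510978044,
    'sw': 0.7483033932135729, 'th': 0.792814371257485, 'tr': 0.8065868263473054,
    'ur': 0.7403193612774451, 'vi': 0.7954091816367266, 'zh': 0.8115768463073852,
}

# Mean over the 15 XNLI languages.
average = sum(xnli_acc.values()) / len(xnli_acc)
print(round(average, 3))  # → 0.808, matching the table's "average" column
```

This confirms the 0.808 figure in the table is the unweighted mean over the 15 languages.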
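The `pipeline_tag: zero-shot-classification` and the new `widget` entries imply the standard Hugging Face zero-shot pipeline. A minimal usage sketch, assuming the `transformers` library is installed and that the repo id is `MoritzLaurer/mDeBERTa-v3-base-mnli-xnli` (inferred from the card title and author, not stated in the diff):

```python
from transformers import pipeline

# Zero-shot classification via NLI: each candidate label is turned into a
# hypothesis and scored for entailment against the input text.
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",  # assumed repo id
)

# One of the Spanish widget examples from the card.
result = classifier(
    "La película empezaba bien pero terminó siendo un desastre.",
    candidate_labels=["positivo", "negativo", "neutral"],
)
print(result["labels"][0])  # highest-scoring label
```

`result` is a dict with `sequence`, `labels` (sorted by score, descending), and `scores`; because the model was trained on multilingual XNLI, the text and the candidate labels need not be in English.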