Dehnes committed
Commit d161770
1 Parent(s): cf5a934

Update README.md

Files changed (1):
  1. README.md +7 -7
README.md CHANGED
@@ -17,7 +17,7 @@ pipeline_tag: zero-shot-classification
 widget:
- - text: "Ich habe ein Problem mit meinem Iphone, das so schnell wie möglich gelöst werden muss."
+ - text: "Ich habe ein Problem mit meinem Iphone das so schnell wie möglich gelöst werden muss."
   candidate_labels: "Computer, Handy, Tablet, dringend, nicht dringend"
@@ -29,9 +29,9 @@ widget:
 In this repository, we present our german zeroshot model.

-This model was trained on the basis of the German BERT large model from [deepset.ai](https://huggingface.co/deepset/gbert-large) and finetuned for natural language inference task based on 847.862 machine-translated nli sentence pairs, using the [mnli](https://huggingface.co/datasets/multi_nli), [anli](https://huggingface.co/datasets/anli) and [snli](https://huggingface.co/datasets/snli) datasets.
+This model was trained on the basis of the German BERT large model from [deepset.ai](https://huggingface.co/deepset/gbert-large) and finetuned for natural language inference based on 847.862 machine-translated nli sentence pairs, using the [mnli](https://huggingface.co/datasets/multi_nli), [anli](https://huggingface.co/datasets/anli) and [snli](https://huggingface.co/datasets/snli) datasets.

-For this purpose, we translated the [MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) dataset using the [fairseq-wmt19-en-de](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) translation model.
+For this purpose, we translated the sentence pairs in these datasets to German.

 ### Model Details

@@ -39,13 +39,13 @@ For this purpose, we translated the [MSMARCO-Passage-Ranking](https://github.com
 |---|---|

-|**Base model** | [```german-nlp-group/electra-base-german-uncased```](https://huggingface.co/german-nlp-group/electra-base-german-uncased) |
+|**Base model** | [```gbert-large```](https://huggingface.co/deepset/gbert-large) |

-|**Finetuning task**| Passage Retrieval / Semantic Search |
+|**Finetuning task**| Text Pair Classification / Natural Language Inference |

-|**Source dataset**| [```MSMARCO-Passage-Ranking```](https://github.com/microsoft/MSMARCO-Passage-Ranking) |
+|**Source dataset**| [```mnli```](https://huggingface.co/datasets/multi_nli) ; [```anli```](https://huggingface.co/datasets/anli) ; [```snli```](https://huggingface.co/datasets/snli) |

-|**Translation model**| [```fairseq-wmt19-en-de```](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) |
+

 DESCRIPTION GOES HERE:
 Satz 1:
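
For context, the README described in this diff documents an NLI-finetuned gbert-large model intended for zero-shot classification, and the widget entry pairs a German sentence with candidate labels. The following is a minimal sketch, not taken from the diff, of how such a model is typically queried through the Hugging Face `zero-shot-classification` pipeline. The model id is a placeholder (the repository name is not shown in this diff), and the German `hypothesis_template` and the `multi_label` setting are assumptions chosen to match the widget's mixed topic/urgency labels.

```python
# Minimal sketch (not from the diff): querying an NLI-finetuned German model
# via the Hugging Face zero-shot-classification pipeline.
from transformers import pipeline

# Placeholder model id -- substitute the repository this README belongs to.
classifier = pipeline(
    "zero-shot-classification",
    model="<this-repo>/gbert-large-zeroshot-nli",  # assumed name, not confirmed by the diff
)

# Example text and candidate labels taken from the widget entry in the README.
text = "Ich habe ein Problem mit meinem Iphone das so schnell wie möglich gelöst werden muss."
labels = ["Computer", "Handy", "Tablet", "dringend", "nicht dringend"]

result = classifier(
    text,
    candidate_labels=labels,
    # Assumed German template; the pipeline's default English template
    # ("This example is {}.") would not match the German NLI training data.
    hypothesis_template="In diesem Satz geht es um das Thema {}.",
    multi_label=True,  # topic and urgency labels are not mutually exclusive
)
print(result["labels"])
print(result["scores"])
```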