MichelBartelsDeepset committed
Commit 26cd7db
Parent(s):
75e1ff4
Update README.md
We trained a German question answering model with a gelectra-base model as its basis.

The dataset is GermanQuAD, a new German-language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad).

The training dataset is one-way annotated and contains 11,518 questions and 11,518 answers. The test dataset is three-way annotated, so it contains 2,204 questions and 2,204 · 3 − 76 = 6,536 answers; 76 incorrect answers were removed.

In addition to the annotations in GermanQuAD, Haystack's distillation feature was used for training. deepset/xlm-roberta-large-squad2 was used as the teacher model.

See https://deepset.ai/germanquad for more details and dataset download in SQuAD format.