julianrisch committed
Commit 81c1c3d
1 Parent(s): 672c1d7

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -1,5 +1,7 @@
 ---
 language: de
+datasets:
+- deepset/germanquad
 license: mit
 thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
 tags:
@@ -36,7 +38,7 @@ embeds_dropout_prob = 0.1
 We evaluated the extractive question answering performance on our GermanQuAD test set.
 Model types and training data are included in the model name.
 For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.
-The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on \\germanquad.
+The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on \\\\germanquad.
 The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.
 ![performancetable](https://lh3.google.com/u/0/d/1IFqkq8OZ7TFnGzxmW6eoxXSYa12f2M7O=w1970-h1546-iv1)
 
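
The added `datasets:` metadata links the model card to the deepset/germanquad dataset on the Hugging Face Hub. As a minimal sketch (assuming the `datasets` library is installed and the split names match the dataset card), the referenced data can be loaded like this:

```python
# Minimal sketch: load the dataset referenced by the new `datasets:` metadata.
# Assumes the Hugging Face `datasets` library is installed.
from datasets import load_dataset

germanquad = load_dataset("deepset/germanquad")

# Inspect one training example (question, context, answers).
print(germanquad["train"][0])
```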