Language model: gbert-large-sts

Language: German
Training data: German STS benchmark train and dev set
Eval data: German STS benchmark test set
Infrastructure: 1x V100 GPU
Published: August 12th, 2021


  • We trained a gbert-large model on the task of estimating the semantic similarity of German-language text pairs. The training data is a machine-translated version of the English STS benchmark.
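To illustrate the similarity measure underlying the STS task: when sentence embeddings are available (e.g. from a bi-encoder setup), text pairs are typically scored with cosine similarity. A minimal sketch, using toy vectors as stand-ins for real embeddings:

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors standing in for sentence embeddings.
identical = cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # -> 1.0
orthogonal = cosine_similarity([1.0, 0.0], [0.0, 1.0])           # -> 0.0
```

Note that this model itself scores a sentence pair directly; the sketch only shows the metric the STS benchmark is built around.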


Hyperparameters

batch_size = 16
n_epochs = 4
warmup_ratio = 0.1
learning_rate = 2e-5
lr_schedule = LinearWarmup


Performance

Stay tuned... and watch out for new papers on arxiv.org ;)


Authors
  • Julian Risch: julian.risch [at] deepset.ai
  • Timo Möller: timo.moeller [at] deepset.ai
  • Julian Gutsch: julian.gutsch [at] deepset.ai
  • Malte Pietsch: malte.pietsch [at] deepset.ai

About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.

Some of our work:

Get in touch: Twitter | LinkedIn | Website

By the way: we're hiring!
