
German FinBERT For QuAD (Further Pre-trained Version, Fine-Tuned for Financial Question Answering)


German FinBERT is a BERT language model focused on the financial domain within the German language. In my paper, I describe in more detail the steps taken to train the model and show that it outperforms its generic benchmarks on finance-specific downstream tasks.

This model is the further pre-trained version of German FinBERT, fine-tuned on the German Ad-Hoc QuAD dataset for question answering.
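As a quick illustration, the model can be used with the Hugging Face question-answering pipeline. This is a minimal sketch: the hub id below is an assumption based on the naming of the related checkpoints (check the actual repository name), and the German question/context pair is an invented example.

```python
from transformers import pipeline

# Hub id is assumed from the naming of the related checkpoints listed below;
# replace it with the actual repository name of this model.
qa = pipeline("question-answering", model="scherrmann/GermanFinBERT_FP_QuAD")

result = qa(
    question="Wie hoch war der Umsatz im Geschäftsjahr?",  # "What was the revenue in the fiscal year?"
    context=(
        "Die Gesellschaft erzielte im Geschäftsjahr einen Umsatz von "
        "12,5 Millionen Euro, ein Plus von acht Prozent gegenüber dem Vorjahr."
    ),
)
print(result["answer"], result["score"])
```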

Overview

  • Author: Moritz Scherrmann
  • Paper: here
  • Architecture: BERT base
  • Language: German
  • Specialization: Financial question answering
  • Base model: scherrmann/GermanFinBERT_FP

Fine-tuning

I fine-tune the model using the 1cycle policy of Smith and Topin (2019) and the Adam optimization method of Kingma and Ba (2014) with standard parameters. I run a grid search on the evaluation set to find the best hyper-parameter setup, testing different values for the learning rate, batch size, and number of epochs, following the suggestions of Chalkidis et al. (2020). To avoid getting good results by chance, I repeat the fine-tuning for each setup five times with different seeds. After finding the best model with respect to the evaluation set, I report the mean result across seeds for that model on the test set. A sketch of this procedure follows.
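The card does not include the training script, so the following is a minimal sketch of the procedure in PyTorch/transformers: Adam with standard parameters, a 1cycle learning-rate schedule, and a hyper-parameter grid repeated over five seeds. The grid values and `steps_per_epoch` are placeholders, not the values used in the paper.

```python
import itertools

from torch.optim import Adam
from torch.optim.lr_scheduler import OneCycleLR
from transformers import AutoModelForQuestionAnswering, set_seed

# Illustrative grid; the actual search values follow Chalkidis et al. (2020)
# and may differ from these placeholders.
grid = {
    "learning_rate": [1e-5, 2e-5, 3e-5],
    "batch_size": [16, 32],
    "num_epochs": [2, 3, 4],
}
seeds = [0, 1, 2, 3, 4]  # five repetitions per setup

for lr, batch_size, num_epochs in itertools.product(*grid.values()):
    for seed in seeds:
        set_seed(seed)
        model = AutoModelForQuestionAnswering.from_pretrained(
            "scherrmann/GermanFinBERT_FP"  # further pre-trained base model
        )
        # Adam (Kingma & Ba, 2014) with standard parameters.
        optimizer = Adam(model.parameters(), lr=lr)
        # 1cycle policy (Smith & Topin, 2019); steps_per_epoch is a placeholder
        # that depends on the Ad-Hoc QuAD training set size and the batch size.
        scheduler = OneCycleLR(
            optimizer, max_lr=lr, epochs=num_epochs, steps_per_epoch=1000
        )
        # ... standard training loop over Ad-Hoc QuAD batches of size batch_size:
        # loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```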

Results

Ad-Hoc QuAD (Question Answering):

  • Exact Match (EM): 52.50%
  • F1 Score: 74.61%
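For reference, exact match counts only predictions that equal a gold answer span after normalization, while F1 gives partial credit for token overlap. The paper does not state the evaluation tooling; one common way to compute both SQuAD-style metrics is the `evaluate` library, shown here on a single made-up prediction.

```python
import evaluate

squad_metric = evaluate.load("squad")

# Made-up example: the prediction overlaps the gold answer but is not exact.
predictions = [{"id": "q1", "prediction_text": "12,5 Millionen"}]
references = [{
    "id": "q1",
    "answers": {"text": ["12,5 Millionen Euro"], "answer_start": [55]},
}]

print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 0.0, 'f1': 80.0}  (partial token overlap yields F1 > 0)
```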

Authors

Moritz Scherrmann: scherrmann [at] lmu.de

For additional details regarding the performance on the fine-tuning datasets and the benchmark results, please refer to the full documentation provided in the study.

See also:

  • scherrmann/GermanFinBERT_SC
  • scherrmann/GermanFinBERT_FP
  • scherrmann/GermanFinBERT_SC_Sentiment