---
language:
- de
license: mit
datasets:
- germaner
metrics:
- precision
- recall
- f1
- accuracy
base_model: deepset/gbert-large
model-index:
- name: gbert-large-germaner
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: germaner
      type: germaner
      args: default
    metrics:
    - type: precision
      value: 0.8693333333333333
      name: precision
    - type: recall
      value: 0.885640362225097
      name: recall
    - type: f1
      value: 0.8774110861903236
      name: f1
    - type: accuracy
      value: 0.9784210744831022
      name: accuracy
---

# gbert-large-germaner

This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) on the germaner dataset.
It achieves the following results on the evaluation set:

- precision: 0.8693
- recall: 0.8856
- f1: 0.8774
- accuracy: 0.9784

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- num_train_epochs: 5
- train_batch_size: 8
- eval_batch_size: 8
- learning_rate: 2e-05
- weight_decay_rate: 0.01
- num_warmup_steps: 0
- fp16: True

### Framework versions

- Transformers 4.18.0
- Datasets 1.18.0
- Tokenizers 0.12.1
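
### Training sketch

The hyperparameter names above (`weight_decay_rate`, `num_warmup_steps`) match the signature of `transformers.create_optimizer`, which points to the Keras/TensorFlow training path. The following is a minimal sketch of how the listed settings could be wired together, not the original training script; the label count and training-set size are assumptions and would need to be taken from the actual data.

```python
import tensorflow as tf
from transformers import TFAutoModelForTokenClassification, create_optimizer

# fp16: True -- on the Keras path this corresponds to mixed-precision training.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Assumption: germaner uses IOB tags for PER, LOC, ORG and OTH,
# i.e. 9 labels including "O".
model = TFAutoModelForTokenClassification.from_pretrained(
    "deepset/gbert-large", num_labels=9
)

num_train_epochs = 5
train_batch_size = 8
num_train_examples = 24_000  # hypothetical; use the size of the tokenized train split
num_train_steps = (num_train_examples // train_batch_size) * num_train_epochs

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=num_train_steps,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)

# Recent Transformers TF models fall back to their built-in
# token-classification loss when compiled without an explicit `loss`.
model.compile(optimizer=optimizer)
# model.fit(train_dataset, validation_data=eval_dataset, epochs=num_train_epochs)
```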
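
## How to use

The fine-tuned model can be loaded through the standard token-classification pipeline. The Hub path below is a placeholder, since this card does not state the account the model is published under; substitute the actual repository id.

```python
from transformers import pipeline

# Hypothetical Hub path -- replace "<user>" with the publishing account.
ner = pipeline(
    "token-classification",
    model="<user>/gbert-large-germaner",
    aggregation_strategy="simple",  # merge B-/I- sub-tokens into entity spans
)

print(ner("Angela Merkel besuchte die Ludwig-Maximilians-Universität in München."))
```

With `aggregation_strategy="simple"`, the pipeline merges sub-word predictions into complete entity spans with a single label and score, which is usually the desired output format for German NER.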