---
language:
- de
license: mit
datasets:
- germaner
metrics:
- precision
- recall
- f1
- accuracy
base_model: deepset/gbert-large
model-index:
- name: gbert-large-germaner
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: germaner
      type: germaner
      args: default
    metrics:
    - type: precision
      value: 0.8693333333333333
      name: precision
    - type: recall
      value: 0.885640362225097
      name: recall
    - type: f1
      value: 0.8774110861903236
      name: f1
    - type: accuracy
      value: 0.9784210744831022
      name: accuracy
---
# gbert-large-germaner
This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) on the germaner dataset.
It achieves the following results on the evaluation set:
- precision: 0.8693
- recall: 0.8856
- f1: 0.8774
- accuracy: 0.9784
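
The precision, recall, and F1 figures are entity-level scores of the kind produced by `seqeval`, while accuracy is token-level. Below is a minimal sketch of how such metrics are typically computed for a Transformers token-classification model; the `-100` label masking and the `label_list` argument follow the standard Transformers NER recipe and are assumptions, not details stated in this card:

```python
import numpy as np
from datasets import load_metric

metric = load_metric("seqeval")  # entity-level precision/recall/F1 plus token accuracy

def compute_metrics(eval_pred, label_list):
    """Convert logits/label ids to tag strings and score them with seqeval."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Positions labelled -100 (special tokens, sub-word continuations) are skipped.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```

When used with `Trainer`, `label_list` can be bound in advance, e.g. `functools.partial(compute_metrics, label_list=label_list)`.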
## Model description
gbert-large-germaner is [deepset/gbert-large](https://huggingface.co/deepset/gbert-large), a German BERT-large model, fine-tuned for token classification (named-entity recognition) on the germaner dataset.
## Intended uses & limitations
The model is intended for named-entity recognition in German text. Because it was fine-tuned on a single NER corpus, performance on domains or entity types that differ from the germaner training data may be lower than the scores reported above; see the inference sketch below for a usage example.
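
As a usage illustration, here is a minimal inference sketch with the `pipeline` API. The model id `gbert-large-germaner` is a placeholder for the path or Hub id under which this checkpoint is hosted, and the example sentence is invented:

```python
from transformers import pipeline

# Placeholder id: point this at the local directory or Hub repo of the checkpoint.
ner = pipeline(
    "token-classification",
    model="gbert-large-germaner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Angela Merkel besuchte im Mai die Ludwig-Maximilians-Universität in München."))
```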
## Training and evaluation data
The model was fine-tuned and evaluated on the germaner dataset (default configuration), a corpus of German sentences annotated for named entities; the metrics above were computed on its evaluation split.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- num_train_epochs: 5
- train_batch_size: 8
- eval_batch_size: 8
- learning_rate: 2e-05
- weight_decay_rate: 0.01
- num_warmup_steps: 0
- fp16: True
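
The card does not include the training script, so the following is a hedged reconstruction with the PyTorch `Trainer`, mapping the hyperparameters above onto `TrainingArguments` names (`weight_decay_rate` → `weight_decay`, `num_warmup_steps` → `warmup_steps`). The `tokens`/`ner_tags` column names and the split handling are assumptions about the germaner dataset:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

raw = load_dataset("germaner")
if "test" not in raw:  # carve out an evaluation split if the dataset ships only "train"
    raw = raw["train"].train_test_split(test_size=0.1, seed=42)
label_list = raw["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large")

def tokenize_and_align(examples):
    """Tokenize pre-split words and align word-level tags to sub-word tokens."""
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous, ids = None, []
        for word_id in word_ids:
            # Special tokens and sub-word continuations get -100 (ignored by the loss).
            ids.append(-100 if word_id is None or word_id == previous else tags[word_id])
            previous = word_id
        all_labels.append(ids)
    tokenized["labels"] = all_labels
    return tokenized

encoded = raw.map(tokenize_and_align, batched=True)

model = AutoModelForTokenClassification.from_pretrained(
    "deepset/gbert-large", num_labels=len(label_list)
)

args = TrainingArguments(
    output_dir="gbert-large-germaner",
    num_train_epochs=5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=2e-5,
    weight_decay=0.01,
    warmup_steps=0,
    fp16=True,  # mixed-precision training; requires a GPU
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```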
### Framework versions
- Transformers 4.18.0
- Datasets 1.18.0
- Tokenizers 0.12.1