---
language:
- de
license: mit
datasets:
- germaner
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: 'Philipp ist 26 Jahre alt und lebt in Nürnberg, Deutschland. Derzeit arbeitet
    er als Machine Learning Engineer und Tech Lead bei Hugging Face, um künstliche
    Intelligenz durch Open Source und Open Science zu demokratisieren.

    '
base_model: deepset/gbert-base
model-index:
- name: gbert-base-germaner
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: germaner
      type: germaner
      args: default
    metrics:
    - type: precision
      value: 0.8520523797532108
      name: precision
    - type: recall
      value: 0.8754204398447607
      name: recall
    - type: f1
      value: 0.8635783563042368
      name: f1
    - type: accuracy
      value: 0.976147969774973
      name: accuracy
---

# gbert-base-germaner

This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on the GermaNER dataset.
It achieves the following results on the evaluation set (a sketch of how such metrics are typically computed follows the list):
- precision: 0.8521
- recall: 0.8754
- f1: 0.8636
- accuracy: 0.9761
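
The precision, recall, and F1 values are entity-level scores, while accuracy is token-level. Metrics of this kind are commonly computed with `seqeval`; whether this card's numbers were produced with it is an assumption, and the tag sequences below are purely illustrative:

```python
from seqeval.metrics import accuracy_score, classification_report, f1_score

# Illustrative gold and predicted BIO tag sequences, one list per sentence.
y_true = [["B-PER", "O", "O", "B-LOC", "I-LOC", "O"]]
y_pred = [["B-PER", "O", "O", "B-LOC", "O", "O"]]

print(classification_report(y_true, y_pred))        # per-entity precision/recall/F1
print("f1:", f1_score(y_true, y_pred))              # micro-averaged entity-level F1
print("accuracy:", accuracy_score(y_true, y_pred))  # token-level accuracy
```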

If you want to learn how to fine-tune BERT yourself using Keras and TensorFlow, check out this blog post:

https://www.philschmid.de/huggingface-transformers-keras-tf

## Model description

This is a German BERT model ([deepset/gbert-base](https://huggingface.co/deepset/gbert-base)) fine-tuned for named entity recognition, framed as token classification, on the GermaNER corpus.

## Intended uses & limitations

The model is intended for named entity recognition in German text. It was fine-tuned only on GermaNER, so performance on other domains or annotation schemes may differ.
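
A minimal inference sketch using the Transformers `pipeline` API (the model id is an assumption; substitute the actual Hub repo id or a local checkpoint path):

```python
from transformers import pipeline

# Model id is an assumption; replace it with the actual Hub repo id
# or a local checkpoint directory.
ner = pipeline(
    "token-classification",
    model="gbert-base-germaner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

text = (
    "Philipp ist 26 Jahre alt und lebt in Nürnberg, Deutschland. "
    "Derzeit arbeitet er als Machine Learning Engineer und Tech Lead "
    "bei Hugging Face."
)

for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```

Expected output is a list of entity spans such as `PER Philipp` and `LOC Nürnberg`; the exact label set depends on the GermaNER tagging scheme.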

## Training and evaluation data

The model was trained and evaluated on the GermaNER dataset, a corpus for German named entity recognition.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a sketch of how they map onto a Keras/TensorFlow setup follows the list:
- num_train_epochs: 5
- train_batch_size: 16
- eval_batch_size: 32
- learning_rate: 2e-05
- weight_decay_rate: 0.01
- num_warmup_steps: 0
- fp16: True
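
A hedged sketch of wiring these settings together with Transformers' Keras utilities, assuming the data pipeline (tokenization, label alignment, `tf_train_dataset`) is built separately; the training-set size and label count below are placeholders:

```python
import tensorflow as tf
from transformers import TFAutoModelForTokenClassification, create_optimizer

# fp16: True -> mixed-precision training; Keras adds loss scaling automatically.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

num_train_epochs = 5
train_batch_size = 16
num_train_examples = 24_000  # placeholder: size of the GermaNER train split
steps_per_epoch = num_train_examples // train_batch_size

# learning_rate, num_warmup_steps, and weight_decay_rate from the list above.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=steps_per_epoch * num_train_epochs,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)

model = TFAutoModelForTokenClassification.from_pretrained(
    "deepset/gbert-base",
    num_labels=9,  # placeholder: BIO tags for the GermaNER label set plus O
)
# Recent Transformers versions compute the loss internally when labels
# are included in the dataset, so no explicit Keras loss is needed here.
model.compile(optimizer=optimizer)
# model.fit(tf_train_dataset, epochs=num_train_epochs)
```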

### Framework versions

- Transformers 4.14.1
- Datasets 1.16.1
- Tokenizers 0.10.3