---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tmp3y468_8j
  results: []
widget:
- text: Ich liebe dich
  example_title: Non-vulgar
- text: Ich hasse dich
  example_title: Vulgar
---
# KIZervus

This model is a fine-tuned version of distilbert-base-german-cased. It is trained to classify German text into the classes "vulgar" and "non-vulgar" speech. The training data is a collection of other labeled German sources; for an overview, see the accompanying GitHub repository. A usage sketch is shown below the results.

It achieves the following results on the evaluation set:
- Train Loss: 0.4221
- Train Accuracy: 0.8025
- Validation Loss: 0.4418
- Validation Accuracy: 0.8094
- Epoch: 2
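
The snippet below is a minimal inference sketch using the `transformers` TensorFlow classes. The repository identifier `KIZervus/KIZervus` and the mapping from class index to the "vulgar"/"non-vulgar" labels are assumptions; check them against the actual model repository and its `config.json` before relying on the output.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Hypothetical repo id -- replace with the actual Hugging Face model identifier.
model_id = "KIZervus/KIZervus"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize a German sentence and run a forward pass.
inputs = tokenizer("Ich liebe dich", return_tensors="tf")
logits = model(**inputs).logits

# Convert logits to class probabilities and pick the most likely class.
probs = tf.nn.softmax(logits, axis=-1)
predicted_class = int(tf.argmax(probs, axis=-1)[0])

# Which index corresponds to "vulgar" is an assumption; see the model's id2label config.
print(predicted_class, probs.numpy())
```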
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1233, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
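
For readers who want to reproduce this setup, the optimizer configuration listed above can be rebuilt in plain Keras as in the sketch below. It mirrors the stated values (a linear PolynomialDecay from 5e-05 to 0.0 over 1233 steps, Adam with the listed betas and epsilon), but it is not the authors' original training script.

```python
import tensorflow as tf

# Learning-rate schedule from the hyperparameter list: PolynomialDecay with
# power=1.0 is a linear decay from 5e-05 to 0.0 over 1233 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-05,
    decay_steps=1233,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the listed beta_1, beta_2, epsilon, and amsgrad settings.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)

# model.compile(optimizer=optimizer, ...)  # compile the TF model as usual
```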
### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4524     | 0.7813         | 0.4397          | 0.7969              | 0     |
| 0.4215     | 0.8030         | 0.4838          | 0.7781              | 1     |
| 0.4221     | 0.8025         | 0.4418          | 0.8094              | 2     |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1