---
datasets:
- DEplain/DEplain-APA-sent
language:
- de
metrics:
- sari
- bleu
- bertscore
library_name: transformers
base_model: google/mt5-large
pipeline_tag: text2text-generation
---
# Model Card for mT5-large-trimmed_deplain-apa
A fine-tuned mT5 model for German sentence-level text simplification.
## Model Details
### Model Description
- **Model type:** Encoder-decoder transformer
- **Language(s) (NLP):** German
- **Finetuned from model:** google/mt5-large
- **Task:** Text simplification
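## How to Get Started with the Model
A minimal inference sketch using the `transformers` pipeline. The repository id below is a placeholder for wherever this checkpoint is hosted, and the sketch assumes the LoRA weights have been merged into the checkpoint:

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual Hub path of this model.
simplifier = pipeline(
    "text2text-generation",
    model="your-org/mT5-large-trimmed_deplain-apa",
)

# Example complex German sentence to simplify.
complex_sentence = (
    "Die Pandemie hat die wirtschaftliche Lage vieler Haushalte "
    "erheblich verschlechtert."
)
print(simplifier(complex_sentence, max_new_tokens=128)[0]["generated_text"])
```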
## Training Details
### Training Data
[DEplain/DEplain-APA-sent](https://huggingface.co/datasets/DEplain/DEplain-APA-sent) \
Stodden et al. (2023): [arXiv:2305.18939](https://arxiv.org/abs/2305.18939)
### Training Procedure
Parameter-efficient fine-tuning with LoRA. The vocabulary was trimmed to the 32,000 most frequent tokens for German (see the sketch below).
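The following is an illustrative sketch of this kind of vocabulary trimming, not the exact script used for this model: token frequencies are counted on a German corpus (the corpus path is a stand-in), the 32,000 most frequent ids are kept, and the embedding and output matrices are sliced accordingly.

```python
from collections import Counter

import torch
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-large")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-large")

# Count token frequencies over a German corpus (path is a stand-in).
counts = Counter()
with open("german_corpus.txt", encoding="utf-8") as f:
    for line in f:
        counts.update(tokenizer(line.strip())["input_ids"])

# Keep the 32,000 most frequent ids, always retaining special tokens.
keep = sorted(set(tokenizer.all_special_ids)
              | {tok_id for tok_id, _ in counts.most_common(32_000)})

# Slice the shared input embedding down to the kept rows.
old_emb = model.get_input_embeddings().weight.data
new_emb = torch.nn.Embedding(len(keep), old_emb.shape[1])
new_emb.weight.data = old_emb[keep].clone()
model.set_input_embeddings(new_emb)

# mT5 does not tie input and output embeddings, so slice the LM head too.
new_head = torch.nn.Linear(old_emb.shape[1], len(keep), bias=False)
new_head.weight.data = model.lm_head.weight.data[keep].clone()
model.lm_head = new_head
model.config.vocab_size = len(keep)

# The tokenizer's sentencepiece vocabulary must be remapped to the new
# ids as well (omitted here); libraries such as vocabtrimmer automate this.
```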
#### Training Hyperparameters
* Batch Size: 16
* Epochs: 1
* Learning Rate: 0.001
* Optimizer: Adafactor
#### LoRA Hyperparameters
* R: 32
* Alpha: 64
* Dropout:
* Target modules: all linear layers
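Taken together, the hyperparameters above correspond roughly to the following `peft`/`transformers` setup. This is a sketch under stated assumptions, not the original training script: the dropout value is left at the `peft` default because it is not stated above, and the base model here is the untrimmed checkpoint.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import MT5ForConditionalGeneration, Seq2SeqTrainingArguments

model = MT5ForConditionalGeneration.from_pretrained("google/mt5-large")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=32,
    lora_alpha=64,
    target_modules="all-linear",  # adapt all linear layers, as listed above
    # lora_dropout is not stated in this card; the peft default applies.
)
model = get_peft_model(model, lora_config)

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-large-trimmed-deplain-apa",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=1e-3,
    optim="adafactor",
)
```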