# t5-small-nlg-multiwoz21
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on MultiWOZ 2.1 for natural language generation (NLG), i.e. producing a system utterance from dialogue acts. Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for the full model description and canonical usage.
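As a minimal sketch, the checkpoint loads with the standard `transformers` API. Note that the dialogue-act string below is a made-up illustration, not ConvLab-3's canonical serialization format; consult ConvLab-3 for the real input convention.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned checkpoint from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("ConvLab/t5-small-nlg-multiwoz21")
model = AutoModelForSeq2SeqLM.from_pretrained("ConvLab/t5-small-nlg-multiwoz21")

# Hypothetical input: a serialized dialogue act. The exact serialization
# scheme is defined by ConvLab-3 and is only assumed here.
dialogue_acts = "[inform][hotel]([name][the cambridge belfry],[area][west])"

inputs = tokenizer(dialogue_acts, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```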
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.001
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 10.0
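As a hedged illustration only, the hyperparameters above roughly map onto a `Seq2SeqTrainingArguments` configuration as shown below. The `output_dir` is a placeholder, the dataset preprocessing and `Trainer` plumbing are omitted, and the `optim="adafactor"` option is assumed to be available in the Transformers version listed under "Framework versions".

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the reported configuration; "output" is a placeholder path
# and the remaining Trainer/dataset setup is not reproduced here.
training_args = Seq2SeqTrainingArguments(
    output_dir="output",                 # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=4,       # 128 * 4 = 512 total train batch size
    optim="adafactor",
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
)
```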
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
- SER (Slot Error Rate) on the MultiWOZ 2.1 test set: 3.70 (self-reported)
- BLEU on the MultiWOZ 2.1 test set: 35.80 (self-reported)
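The reported numbers come from ConvLab-3's own evaluation tooling. As a rough illustration only, corpus-level BLEU over generated responses could be checked with `sacrebleu` (an assumption here, not the official script; SER additionally requires dialogue-act annotations and is not sketched).

```python
import sacrebleu

# Hypothetical generated responses and gold references for the MultiWOZ 2.1
# test set; producing these lists is left to the ConvLab-3 pipeline.
hypotheses = ["there are 5 hotels in the west . would you like to book one ?"]
references = ["there are 5 hotels in the west area . shall i book one for you ?"]

# sacrebleu takes a list of hypothesis strings and a list of reference streams,
# each stream aligned one-to-one with the hypotheses.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")
```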