
mle-policy-multiwoz21

This is an MLE (maximum likelihood estimation) dialogue policy model trained on the MultiWOZ 2.1 dataset.

Refer to ConvLab-3 for the model description and usage instructions.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • seed: 0
  • optimizer: Adam
  • num_epochs: 24
  • checkpoint selection: the checkpoint that performed best on the validation set was kept
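The hyperparameters above can be sketched as a standard supervised (behavior-cloning) training loop in PyTorch. This is an illustrative sketch only, not the actual ConvLab-3 implementation: the MLP policy, the state/action dimensions, and the multi-label BCE loss are assumptions, while the learning rate, batch size, epoch count, optimizer, and best-on-validation checkpointing follow the list above.

```python
import torch
from torch import nn
from torch.optim import Adam

# Hypothetical state/action sizes; the real ConvLab-3 policy uses its own
# MultiWOZ state vectorization and dialogue-act space.
STATE_DIM, ACTION_DIM = 340, 209

# Simple MLP policy as a stand-in for the actual architecture.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 100), nn.ReLU(),
    nn.Linear(100, ACTION_DIM),
)
optimizer = Adam(policy.parameters(), lr=1e-4)  # learning_rate: 0.0001
loss_fn = nn.BCEWithLogitsLoss()                # multi-label dialogue acts

def train(train_loader, val_loader, num_epochs=24):
    """MLE training: minimize loss on expert actions; keep the checkpoint
    that performs best on the validation set."""
    best_val, best_state = float("inf"), None
    for _ in range(num_epochs):
        policy.train()
        for states, actions in train_loader:    # batches of size 32
            optimizer.zero_grad()
            loss_fn(policy(states), actions).backward()
            optimizer.step()
        # Evaluate on the validation set after each epoch.
        policy.eval()
        with torch.no_grad():
            val = sum(loss_fn(policy(s), a).item() for s, a in val_loader)
        if val < best_val:
            best_val = val
            best_state = {k: v.clone() for k, v in policy.state_dict().items()}
    policy.load_state_dict(best_state)
    return policy
```

The best-on-validation checkpointing here is what the last bullet refers to: rather than returning the final epoch's weights, the weights with the lowest validation loss are restored at the end.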

Framework versions

  • Transformers 4.18.0
  • PyTorch 1.10.2+cu111