
iva_mt_wslot-m2m100_418M-en-ja

This model is a fine-tuned version of facebook/m2m100_418M on the iva_mt_wslot dataset (English→Japanese). It achieves the following results on the evaluation set:

  • Loss: 0.0153
  • Bleu: 66.503
  • Gen Len: 20.9519

Model description

This is an English→Japanese machine translation model aimed at virtual-assistant (IVA) utterances. It is fine-tuned from facebook/m2m100_418M on the iva_mt_wslot dataset so that slot annotations present in the source text are carried over into the translation (see the citation below for details).
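
As a quick reference, below is a minimal inference sketch using the standard M2M100 API from Transformers. The repo id passed to `from_pretrained` is a placeholder (use the full Hugging Face path of this model), and the example sentence is illustrative only.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "iva_mt_wslot-m2m100_418M-en-ja"  # placeholder: replace with the full hub repo id
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

# Translate an English virtual-assistant utterance to Japanese.
tokenizer.src_lang = "en"
inputs = tokenizer("Set an alarm for 7 am tomorrow", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("ja"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```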

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 7
  • mixed_precision_training: Native AMP
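
The training script itself is not part of this card; for orientation, here is a hypothetical Seq2SeqTrainingArguments configuration that mirrors the values above (the Adam betas and epsilon listed are the Transformers defaults, so they are not set explicitly):

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the training configuration from the hyperparameters above.
training_args = Seq2SeqTrainingArguments(
    output_dir="iva_mt_wslot-m2m100_418M-en-ja",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=7,
    fp16=True,                    # Native AMP mixed-precision training
    evaluation_strategy="epoch",
    predict_with_generate=True,   # needed to report BLEU and Gen Len at evaluation time
)
```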

Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0185        | 1.0   | 2017  | 0.0164          | 63.4304 | 20.6499 |
| 0.0134        | 2.0   | 4034  | 0.0150          | 64.827  | 20.666  |
| 0.0104        | 3.0   | 6051  | 0.0146          | 64.465  | 21.2155 |
| 0.0079        | 4.0   | 8068  | 0.0148          | 64.8578 | 20.7915 |
| 0.0062        | 5.0   | 10085 | 0.0149          | 65.9149 | 21.0718 |
| 0.005         | 6.0   | 12102 | 0.0151          | 66.2905 | 20.8766 |
| 0.004         | 7.0   | 14119 | 0.0153          | 66.503  | 20.9519 |
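
The BLEU scores above can in principle be recomputed with a sacrebleu-style metric. The sketch below uses the evaluate library with a made-up prediction/reference pair, since the exact scoring setup (tokenization in particular) is not documented here.

```python
import evaluate

# Illustrative only: the precise BLEU configuration behind the scores above is not specified,
# and BLEU for Japanese is sensitive to the choice of tokenizer.
bleu = evaluate.load("sacrebleu")
predictions = ["明日の朝7時にアラームを設定して"]
references = [["明日の朝7時にアラームをセットして"]]
print(bleu.compute(predictions=predictions, references=references)["score"])
```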

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.12.0
  • Tokenizers 0.13.3

Citation

If you use this model, please cite the following:

@article{Sowanski2023SlotLI,
  title={Slot Lost in Translation? Not Anymore: A Machine Translation Model for Virtual Assistants with Type-Independent Slot Transfer},
  author={Marcin Sowanski and Artur Janicki},
  journal={2023 30th International Conference on Systems, Signals and Image Processing (IWSSIP)},
  year={2023},
  pages={1-5}
}