
EN, ES and NL to AMR parsing (stratified)

This version was trained on a subselection of the data: the AMR 3.0 corpus was translated into all the relevant languages, and the combined dataset was then stratified so that only a third of each language's portion is used. In total, the model therefore sees the equivalent of the full AMR 3.0 corpus exactly once. In other words, every language was undersampled for research purposes.
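
For illustration, the stratified undersampling could look like the following sketch. The dataset path and loading details are placeholders, not the actual preprocessing pipeline:

```python
from datasets import concatenate_datasets, load_dataset

# Placeholder: one translated AMR 3.0 training split per language.
langs = ["en", "es", "nl"]
parts = []
for lang in langs:
    ds = load_dataset("path/to/translated-amr30", name=lang, split="train")  # hypothetical path
    ds = ds.shuffle(seed=42)
    # Keep a third of each language so the combined data is the size of
    # one full AMR 3.0 corpus.
    parts.append(ds.select(range(len(ds) // 3)))

train_dataset = concatenate_datasets(parts).shuffle(seed=42)
```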

This model is a fine-tuned version of facebook/mbart-large-cc25 on the stratified EN/ES/NL AMR 3.0 data described above. It achieves the following results on the evaluation set:

  • Loss: 0.5902
  • Smatch Precision: 74.83
  • Smatch Recall: 77.62
  • Smatch Fscore: 76.2
  • Smatch Unparsable: 0
  • Percent Not Recoverable: 0.2904
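
As a quick-start illustration, the checkpoint can be loaded with the generic transformers seq2seq API. This is a minimal sketch: turning the generated sequence back into a well-formed AMR graph typically requires the project's own linearization and post-processing code, which is not shown here.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "BramVanroy/mbart-large-cc25-ft-amr30-en_es_nl-stratified"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# mBART needs the source language to be set; "en_XX" is its English code
# (use "es_XX" or "nl_XX" for Spanish or Dutch input).
tokenizer.src_lang = "en_XX"
inputs = tokenizer("The boy wants to go.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=512, num_beams=5)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```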

Model description

mBART (large, cc25) fine-tuned as a sequence-to-sequence semantic parser that generates linearized AMR 3.0 graphs from English, Spanish, and Dutch input sentences.

Intended uses & limitations

More information needed

Training and evaluation data

See the description at the top of this card: the AMR 3.0 corpus, translated into the relevant languages and stratified so that only a third of each language's data is used.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 25
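
For reference, these settings map roughly onto the following Seq2SeqTrainingArguments; this is a sketch of equivalent flags, not the exact training script:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-large-cc25-ft-amr30-en_es_nl-stratified",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # effective train batch size: 2 * 8 = 16
    num_train_epochs=25,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```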

Training results

| Training Loss | Epoch | Step | Validation Loss | Smatch Precision | Smatch Recall | Smatch Fscore | Smatch Unparsable | Percent Not Recoverable |
|--------------:|------:|-----:|----------------:|-----------------:|--------------:|--------------:|------------------:|------------------------:|
| 0.3941 | 1.0 | 3477 | 1.8519 | 18.33 | 65.69 | 28.66 | 0 | 0.0 |
| 0.3983 | 2.0 | 6954 | 0.9133 | 29.25 | 72.49 | 41.68 | 0 | 0.1742 |
| 0.2932 | 3.0 | 10431 | 0.7729 | 34.75 | 74.02 | 47.29 | 0 | 0.0 |
| 0.2121 | 4.0 | 13908 | 0.7737 | 34.16 | 74.66 | 46.87 | 2 | 0.0 |
| 0.0401 | 5.0 | 17385 | 0.7656 | 36.6 | 75.39 | 49.27 | 0 | 0.0 |
| 0.1274 | 6.0 | 20862 | 0.7373 | 44.18 | 75.99 | 55.88 | 0 | 0.0 |
| 0.0668 | 7.0 | 24339 | 0.6024 | 50.13 | 77.11 | 60.76 | 0 | 0.0 |
| 0.0681 | 8.0 | 27816 | 0.6398 | 50.92 | 77.53 | 61.47 | 0 | 0.0 |
| 0.0381 | 9.0 | 31293 | 0.5849 | 57.36 | 77.99 | 66.1 | 0 | 0.1161 |
| 0.0586 | 10.0 | 34770 | 0.5628 | 59.08 | 77.76 | 67.15 | 0 | 0.0 |
| 0.0074 | 11.0 | 38247 | 0.5632 | 60.25 | 79.02 | 68.37 | 0 | 0.1742 |
| 0.0055 | 12.0 | 41724 | 0.5795 | 59.25 | 78.6 | 67.57 | 0 | 0.2904 |
| 0.0014 | 13.0 | 45201 | 0.5725 | 64.79 | 78.78 | 71.11 | 0 | 0.1161 |
| 0.0063 | 14.0 | 48678 | 0.5494 | 67.65 | 78.58 | 72.71 | 0 | 0.0 |
| 0.012 | 15.0 | 52155 | 0.5821 | 66.07 | 78.66 | 71.82 | 0 | 0.0581 |
| 0.0216 | 16.0 | 55632 | 0.5914 | 66.43 | 78.79 | 72.08 | 0 | 0.0581 |
| 0.0155 | 17.0 | 59109 | 0.5684 | 70.69 | 78.61 | 74.44 | 0 | 0.1161 |
| 0.0019 | 18.0 | 62586 | 0.5796 | 70.35 | 78.68 | 74.28 | 0 | 0.1161 |
| 0.0224 | 19.0 | 66063 | 0.5885 | 69.56 | 78.73 | 73.86 | 0 | 0.1742 |
| 0.0112 | 20.0 | 69540 | 0.5917 | 72.31 | 78.4 | 75.23 | 0 | 0.1161 |
| 0.0014 | 21.0 | 73017 | 0.6102 | 72.56 | 78.24 | 75.3 | 0 | 0.2323 |
| 0.0077 | 22.0 | 76494 | 0.5989 | 73.48 | 77.96 | 75.66 | 0 | 0.1742 |
| 0.0072 | 23.0 | 79971 | 0.5907 | 74.32 | 78.04 | 76.13 | 0 | 0.0581 |
| 0.0066 | 24.0 | 83448 | 0.5899 | 74.62 | 77.87 | 76.21 | 0 | 0.2323 |
| 0.0048 | 25.0 | 86925 | 0.5902 | 74.83 | 77.62 | 76.2 | 0 | 0.2904 |
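
The Smatch columns compare predicted and gold AMR graphs triple by triple. Below is a minimal sketch of how precision, recall, and F-score relate, using the smatch package (assuming `pip install smatch`; the exact evaluation wrapper used for this run may differ):

```python
import smatch

gold = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"
pred = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02))"

# Best-matching triple counts between the two single-line AMR strings.
match, test_total, gold_total = smatch.get_amr_match(pred, gold)
precision = match / test_total   # correct triples / predicted triples
recall = match / gold_total      # correct triples / gold triples
f_score = 2 * precision * recall / (precision + recall)
print(f"P={precision:.2%} R={recall:.2%} F={f_score:.2%}")
```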

Framework versions

  • Transformers 4.34.0.dev0
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.2
  • Tokenizers 0.13.3

Model size

  • 614M params (F32, safetensors)
