
En-Nso (English → Northern Sotho)

This model is a fine-tuned version of kabelomalapane/en_nso_ukuxhumana_model. The fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 2.9067
  • BLEU: 23.5436
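A minimal inference sketch, assuming this checkpoint exposes the standard Transformers seq2seq interface (the model ID below is the base model named in this card; substitute this fine-tuned checkpoint's own repository ID):

```python
# Hedged sketch: loads a seq2seq translation checkpoint and translates one
# English sentence to Northern Sotho. Requires network access to download
# the model weights on first use.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "kabelomalapane/en_nso_ukuxhumana_model"  # or this fine-tuned model's ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Good morning, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This mirrors the standard `AutoModelForSeq2SeqLM` workflow; the example sentence is illustrative only.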

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
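The hyperparameters above can be sketched as a `Seq2SeqTrainingArguments` configuration. This is a config fragment under assumptions: the output directory is a placeholder, and the Adam betas/epsilon listed above match the Transformers defaults (`adam_beta1=0.9`, `adam_beta2=0.999`, `adam_epsilon=1e-8`), so they are not set explicitly.

```python
# Sketch of the training configuration reported in this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="en-nso-finetuned",   # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    predict_with_generate=True,      # generate() at eval time, needed for BLEU
)
```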

Training results

| Training Loss | Epoch | Step | Validation Loss | BLEU    |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log        | 1.0   | 14   | 3.7614          | 8.0360  |
| No log        | 2.0   | 28   | 3.3181          | 20.7201 |
| No log        | 3.0   | 42   | 3.1627          | 21.5932 |
| No log        | 4.0   | 56   | 3.0935          | 22.0268 |
| No log        | 5.0   | 70   | 3.0227          | 21.0859 |
| No log        | 6.0   | 84   | 2.9740          | 21.6963 |
| No log        | 7.0   | 98   | 2.9419          | 23.2214 |
| No log        | 8.0   | 112  | 2.9227          | 24.4649 |
| No log        | 9.0   | 126  | 2.9102          | 23.5293 |
| No log        | 10.0  | 140  | 2.9067          | 23.5516 |

Framework versions

  • Transformers 4.20.1
  • Pytorch 1.11.0+cu113
  • Datasets 2.3.2
  • Tokenizers 0.12.1