Distilbert-Base-Uncased-Go-Emotion

Model description:

Note: this model is not working well; its predictions should be treated with caution.
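
For reference, a minimal inference sketch using the Hugging Face `transformers` pipeline is shown below. The model identifier matches this repository; the `top_k=None` setting (returning scores for every GoEmotions label rather than only the top one) and the example sentence are assumptions about how the multi-label outputs are best inspected, not part of this card.

```python
# Minimal inference sketch (assumes the `transformers` library is installed).
from transformers import pipeline

# Load this model from the Hugging Face Hub.
classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-go-emotion",
    top_k=None,  # return a score for every emotion label, not just the highest one
)

# Example input; any short English sentence works here.
scores = classifier("I am so happy you came to visit!")
print(scores)
```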

Training Parameters:

  Num Epochs = 3
  Instantaneous batch size per device = 32
  Total train batch size (w. parallel, distributed & accumulation) = 32
  Gradient Accumulation steps = 1
  Total optimization steps = 15831
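
As a rough guide, the sketch below shows how the settings listed above would map onto `transformers.TrainingArguments`. Only the values reported in this card are taken from the training run; the output directory and any unlisted hyperparameters (learning rate, warmup, etc.) are placeholder assumptions.

```python
# Sketch mapping the reported settings onto TrainingArguments.
# Only the commented values come from this card; everything else is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./distilbert-go-emotion",  # placeholder path, not from the card
    num_train_epochs=3,                    # Num Epochs = 3
    per_device_train_batch_size=32,        # Instantaneous batch size per device = 32
    gradient_accumulation_steps=1,         # Gradient Accumulation steps = 1
)
```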

TrainOutput:

'train_loss': 0.105500

Evaluation Output:

 'eval_accuracy_thresh': 0.962023913860321,
 'eval_loss': 0.11090277135372162,
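
The `eval_accuracy_thresh` metric is a thresholded accuracy for multi-label classification: predicted probabilities are binarized and compared element-wise against the binary ground-truth labels. The sketch below shows one common formulation; the sigmoid activation and the 0.5 cutoff are assumptions, since the card does not state exactly how the metric was computed.

```python
# Sketch of a thresholded multi-label accuracy metric.
# The sigmoid and the 0.5 cutoff are assumptions about how
# eval_accuracy_thresh was computed; the card does not specify them.
import torch

def accuracy_thresh(logits: torch.Tensor, labels: torch.Tensor, thresh: float = 0.5) -> float:
    """Fraction of (example, label) pairs where the thresholded
    sigmoid prediction matches the binary ground-truth label."""
    preds = torch.sigmoid(logits) > thresh
    return (preds == labels.bool()).float().mean().item()
```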

Colab Notebook:

Notebook
