
gpt2-finetuned-stsb

This model is GPT-2 fine-tuned on the GLUE STS-B dataset. It achieves the following result on the validation set:

  • Pearson correlation coefficient (PCC): 0.74999
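
A minimal inference sketch follows. It assumes the checkpoint is published as PavanNeerudu/gpt2-finetuned-stsb (the repo name on this card) with a single-output regression head; the example sentence pair is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "PavanNeerudu/gpt2-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# STS-B pairs two sentences; the model predicts a similarity score on a 0-5 scale.
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays a guitar.",
    return_tensors="pt",
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted similarity: {score:.2f}")
```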

Model Details

GPT-2 is a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on raw text only, with no humans labelling it in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from that text. More precisely, it was trained to guess the next word in sentences. Despite this generative objective, it achieves very good results on text classification tasks when fine-tuned.
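
To illustrate how the causal GPT-2 backbone can be adapted for STS-B: GPT2ForSequenceClassification in transformers places a linear head on the last token's hidden state, and num_labels=1 turns the task into regression over the similarity score. The setup below is a sketch of this standard pattern, not necessarily the exact configuration used for this card.

```python
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = GPT2ForSequenceClassification.from_pretrained(
    "gpt2",
    num_labels=1,  # single regression output for the STS-B similarity score
    pad_token_id=tokenizer.pad_token_id,
)
```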

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a fine-tuning sketch that wires them up follows the list:

  • learning_rate: 2e-5
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 123
  • optimizer epsilon: 1e-08
  • num_epochs: 4
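
A fine-tuning sketch using these hyperparameters with the transformers Trainer. Only the values in the list above come from this card; the dataset loading, tokenization length, and output directory are assumptions.

```python
from datasets import load_dataset
from transformers import (GPT2Tokenizer, GPT2ForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2ForSequenceClassification.from_pretrained(
    "gpt2", num_labels=1, pad_token_id=tokenizer.pad_token_id)

# Load GLUE STS-B and tokenize the sentence pairs (max_length is an assumption).
stsb = load_dataset("glue", "stsb")
def tokenize(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, padding="max_length", max_length=128)
stsb = stsb.map(tokenize, batched=True)
stsb = stsb.rename_column("label", "labels")  # float labels -> MSE regression loss

args = TrainingArguments(
    output_dir="gpt2-finetuned-stsb",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=4,
    seed=123,
    adam_epsilon=1e-8,
)
Trainer(model=model, args=args,
        train_dataset=stsb["train"],
        eval_dataset=stsb["validation"]).train()
```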

Training results

| Epoch | Training Loss | Training PCC | Validation Loss | Validation PCC |
|-------|---------------|--------------|-----------------|----------------|
| 1     | 3.14066       | 0.09220      | 2.45140         | 0.11778        |
| 2     | 1.96428       | 0.30958      | 1.54366         | 0.58155        |
| 3     | 1.53877       | 0.53427      | 1.14102         | 0.71384        |
| 4     | 1.29935       | 0.62852      | 1.00576         | 0.74999        |
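
The PCC columns above are the Pearson correlation between predicted and gold similarity scores. A minimal sketch of how it can be computed, using hypothetical prediction and reference arrays:

```python
from scipy.stats import pearsonr

predictions = [2.1, 4.8, 0.3, 3.5]  # hypothetical model outputs
references = [2.0, 5.0, 0.0, 3.2]   # hypothetical gold scores
pcc, _ = pearsonr(predictions, references)
print(f"PCC: {pcc:.5f}")
```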
