---
language:
  - en
license: apache-2.0
datasets:
  - glue
metrics:
  - accuracy
model-index:
  - name: t5-base-finetuned-qnli
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: GLUE QNLI
          type: glue
          args: qnli
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9123
---

# T5-base-finetuned-qnli

This model is T5 fine-tuned on the GLUE QNLI dataset. It achieves the following results on the validation set:

- Accuracy: 0.9123

## Model Details

T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks, in which each task is converted into a text-to-text format.
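
A minimal inference sketch with 🤗 Transformers is shown below. The Hub ID is assumed from the model name, and the prompt follows the input format described in the Tokenization section; adjust both if your setup differs.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Assumed Hub ID; replace with the actual checkpoint path if different.
model_name = "PavanNeerudu/t5-base-finetuned-qnli"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Build the prompt in the same text-to-text format used for fine-tuning
# (see the Tokenization section below).
question = "Where is the Eiffel Tower located?"
sentence = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
prompt = "qnli question: " + question + "sentence: " + sentence

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
# The model generates the label text, e.g. "equivalent" or "not_equivalent".
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```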

## Training procedure

### Tokenization

Since T5 is a text-to-text model, the labels of the dataset are converted as follows: for each example, a sentence is formed as "qnli question: " + qnli_question + "sentence: " + qnli_sentence and fed to the tokenizer to get the input_ids and attention_mask. For each label, the target text is chosen as "equivalent" if the label is 1, and "not_equivalent" otherwise, and is tokenized to get its input_ids and attention_mask. During training, pad tokens in these target input_ids are replaced with -100 so that no loss is calculated for them. These input_ids are then given as the labels, and the attention_mask of the targets is given as the decoder_attention_mask.
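
A hedged sketch of this preprocessing is given below, written against the 🤗 `datasets`/`transformers` APIs. The function name, sequence lengths, and padding strategy are illustrative assumptions, not taken from the original training script.

```python
from datasets import load_dataset
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
raw = load_dataset("glue", "qnli")

def preprocess(example):
    # Text-to-text input, concatenated as described above.
    text = "qnli question: " + example["question"] + "sentence: " + example["sentence"]
    model_inputs = tokenizer(text, truncation=True, padding="max_length", max_length=128)

    # Verbalize the integer label and tokenize it as the target sequence.
    target = "equivalent" if example["label"] == 1 else "not_equivalent"
    target_enc = tokenizer(target, truncation=True, padding="max_length", max_length=8)

    # Replace pad tokens in the targets with -100 so they are ignored by the loss.
    model_inputs["labels"] = [
        tok if tok != tokenizer.pad_token_id else -100
        for tok in target_enc["input_ids"]
    ]
    model_inputs["decoder_attention_mask"] = target_enc["attention_mask"]
    return model_inputs

tokenized = raw.map(preprocess)
```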

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 3e-4
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 3.0
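
A minimal sketch of how these values could be plugged into `Seq2SeqTrainingArguments` is shown below. The list above reports only the optimizer epsilon, so any other optimizer or scheduler settings, as well as the output directory, are assumptions here.

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters from the list above; output_dir is illustrative.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-qnli",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_epsilon=1e-8,   # the only optimizer setting reported above
    num_train_epochs=3.0,
)
```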

### Training results

| Epoch | Training Loss | Validation Accuracy |
|:-----:|:-------------:|:-------------------:|
| 1     | 0.0571        | 0.8973              |
| 2     | 0.0329        | 0.9068              |
| 3     | 0.0133        | 0.9123              |