
T5ForSequenceClassification

T5ForSequenceClassification adapts the original T5 architecture for sequence classification tasks.

T5 was originally built for text-to-text tasks and excels at them. It can handle any NLP task that has been converted to a text-to-text format, including sequence classification. You can find here how the original T5 is used for sequence classification.
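For reference, this is roughly how an original T5 checkpoint handles classification in its text-to-text format. The sketch below uses the public t5-small checkpoint as an assumption for illustration (the original T5 checkpoints were pre-trained on a multitask mixture that includes MNLI with this prompt format); the premise and hypothesis are serialized into a prompt and the label is generated as text.

```python
# Sketch: sequence classification with the original T5 in text-to-text form.
# "t5-small" is used here only as a small illustrative checkpoint.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

prompt = (
    "mnli premise: A soccer game with multiple males playing. "
    "hypothesis: Some men are playing a sport."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)

# The model answers with the label itself, as text (e.g. an MNLI class name).
label = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(label)
```

Note that the class label comes back as generated text, which is exactly the indirection T5ForSequenceClassification removes by predicting class logits directly.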

Our motivation for building T5ForSequenceClassification is that the full original T5 architecture is not needed for most NLU tasks. NLU tasks generally do not require generating text, so a large decoder is unnecessary. By removing the decoder, we can halve the number of parameters (and thus the computation cost) and efficiently optimize the network for the given task.

Table of Contents

  1. Usage
  2. Why use T5ForSequenceClassification?
  3. T5ForClassification vs T5
  4. Results

Usage

T5ForSequenceClassification supports zero-shot classification. It can directly be used for:

  • topic classification
  • intent recognition
  • boolean question answering
  • sentiment analysis
  • and any other task whose goal is to classify a text...

Since the T5ForClassification class is currently not supported by the transformers library, you cannot directly use this model on the Hub. To use T5ForSequenceClassification, you will have to install additional packages and model weights. You can find instructions here.
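Zero-shot classification with an NLI-style classifier is conventionally framed as entailment: the input text is the premise, and each candidate label is rewritten into a hypothesis. The template and helper below are a common convention, not something prescribed by this model card, and actually scoring the pairs requires the installed T5ForSequenceClassification weights.

```python
# Sketch of the usual zero-shot NLI framing: the text is the premise and each
# candidate label becomes a hypothesis. The classifier (once installed) scores
# each (premise, hypothesis) pair; the label with the highest entailment wins.

def build_pairs(text, candidate_labels, template="This example is about {}."):
    """Turn a text and candidate labels into (premise, hypothesis) pairs."""
    return [(text, template.format(label)) for label in candidate_labels]

pairs = build_pairs(
    "The team scored a last-minute goal to win the final.",
    ["sports", "politics", "cooking"],
)
for premise, hypothesis in pairs:
    print(f"premise: {premise!r} -> hypothesis: {hypothesis!r}")
```

Because the labels only appear in the hypotheses, the same trained model can classify over any label set at inference time, which is what makes the zero-shot tasks listed above possible.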

Why use T5ForSequenceClassification?

Models based on the BERT architecture, like RoBERTa and DeBERTa, have shown very strong performance on sequence classification tasks and are still widely used today. However, those models only scale up to ~1.5B parameters (DeBERTa xxlarge), which limits their knowledge compared to bigger models. On the other hand, models based on the T5 architecture scale up to ~11B parameters (t5-xxl), and innovations on this architecture are recent and keep coming (mT5, Flan-T5, UL2, Flan-UL2, and probably more...)

T5ForClassification vs T5

T5ForClassification Architecture:

  • Encoder: same as original T5
  • Decoder: only the first layer (for pooling purpose)
  • Classification head: simple Linear layer on top of the decoder
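The layout above can be sketched with stand-in PyTorch modules. These are generic Transformer layers with made-up dimensions, purely to illustrate the shape of the architecture, not the actual T5 blocks or weights:

```python
import torch
import torch.nn as nn

class T5ForClassificationSketch(nn.Module):
    """Illustrative stand-in: encoder + one decoder layer for pooling + linear head."""

    def __init__(self, d_model=64, nhead=4, num_labels=3):
        super().__init__()
        # Stand-in for the T5 encoder stack (kept the same as the original T5).
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers=2,
        )
        # Only the first decoder layer is kept, used to pool the encoder states.
        self.pooling_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        # A learned query token attends over the encoder output via cross-attention.
        self.query = nn.Parameter(torch.randn(1, 1, d_model))
        # Simple linear classification head on top of the pooled representation.
        self.classifier = nn.Linear(d_model, num_labels)

    def forward(self, embeds):  # embeds: (batch, seq_len, d_model)
        enc = self.encoder(embeds)
        pooled = self.pooling_layer(self.query.expand(embeds.size(0), -1, -1), enc)
        return self.classifier(pooled[:, 0])  # (batch, num_labels) class logits

model = T5ForClassificationSketch()
logits = model(torch.randn(2, 10, 64))
print(logits.shape)  # torch.Size([2, 3])
```

The single retained decoder layer plays the role of a learned pooler: its cross-attention summarizes the encoder states into one vector, which the linear head maps to class logits.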

Benefits and Drawbacks:

  • (+) Keeps T5's encoding strength
  • (+) Half the parameter count
  • (+) Interpretable outputs (class logits)
  • (+) No generation mistakes and faster prediction (no generation latency)
  • (-) Loses the text-to-text ability

Results

Results on the validation data of training tasks:

Dataset    Accuracy  F1
MNLI (m)   0.923     0.923
MNLI (mm)  0.922     0.922
SNLI       0.942     0.942
SciTail    0.966     0.647

Results on validation data of unseen tasks (zero-shot):

Dataset    Accuracy  F1
?          ?         ?

Special thanks to philschmid for making a Flan-T5-xxl checkpoint in fp16.


Datasets used to train AntoineBlanot/flan-t5-xxl-classif-3way
