flan-t5-base-mnli

flan-t5-base-mnli is the FLAN-T5 base model (flan-t5-base) fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus.

Overview

  • License: MIT
  • Language model: flan-t5-base
  • Language: English
  • Downstream task: Zero-shot Classification, Text Classification
  • Training data: MNLI
  • Eval data: MNLI (Matched and Mismatched)
  • Infrastructure: 1x NVIDIA RTX 3070

Model Usage

Use the code below to get started with the model. The model can be loaded with the zero-shot-classification pipeline like so:

from transformers import pipeline

classifier = pipeline(
  'zero-shot-classification',
  model='sjrhuschlee/flan-t5-base-mnli',
  trust_remote_code=True,  # runs the custom pipeline code shipped with the model repo
)

You can then use this pipeline to classify sequences into any of the class names you specify. For example:

sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
# {'sequence': 'one day I will see the world',
#  'labels': ['travel', 'cooking', 'dancing'],
#  'scores': [0.7944864630699158, 0.10624771565198898, 0.09926578402519226]}
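For reference, the zero-shot pipeline works by pairing the input sequence (as an NLI premise) with one hypothesis per candidate label and then normalizing the entailment scores across labels. A minimal stdlib sketch of that normalization step, using made-up entailment logits (illustrative numbers, not real model outputs):

```python
import math

def zero_shot_scores(entailment_logits):
    """Softmax-normalize per-label entailment logits into class probabilities.

    `entailment_logits` holds one entailment score per candidate label,
    e.g. taken from the NLI model's output for each (premise, hypothesis) pair.
    """
    m = max(entailment_logits)
    exps = [math.exp(x - m) for x in entailment_logits]  # numerically stable softmax
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for ['travel', 'cooking', 'dancing']
scores = zero_shot_scores([2.1, 0.1, 0.03])
# The label with the largest entailment logit receives the highest score,
# and the scores sum to 1, matching the 'scores' list the pipeline returns.
```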

Metrics

Results on the MNLI validation sets; the `_mm` keys refer to the mismatched split:

# MNLI
{
    "eval_accuracy": 0.8746816097809476,
    "eval_accuracy_mm": 0.8727624084621644,
    "eval_loss": 0.4271220564842224,
    "eval_loss_mm": 0.4265698492527008,
    "eval_samples": 9815,
    "eval_samples_mm": 9832,
}
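Here `eval_accuracy` is plain classification accuracy over the three MNLI labels (entailment, neutral, contradiction): the fraction of validation examples whose predicted label matches the gold label. A minimal sketch of that computation (the prediction and label lists below are made up for illustration):

```python
def accuracy(predictions, references):
    """Fraction of positions where the predicted label equals the gold label."""
    assert len(predictions) == len(references)
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Toy labels: 0 = entailment, 1 = neutral, 2 = contradiction
preds = [0, 1, 2, 2, 0]
golds = [0, 1, 1, 2, 0]
accuracy(preds, golds)  # 4 of 5 predictions match -> 0.8
```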

Uses

Direct Use

This fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification and zero-shot sequence classification.
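For sentence-pair classification, the pipeline's `hypothesis_template` parameter lets you phrase the hypothesis yourself: internally, each candidate label is substituted into the template to form one (premise, hypothesis) NLI pair. A stdlib sketch of that pairing step (the template and labels here are illustrative; `"This example is {}."` is the pipeline's default template):

```python
def build_nli_pairs(premise, candidate_labels,
                    hypothesis_template="This example is {}."):
    """Form one (premise, hypothesis) pair per candidate label,
    mirroring what the zero-shot pipeline feeds the NLI model."""
    return [(premise, hypothesis_template.format(label))
            for label in candidate_labels]

pairs = build_nli_pairs(
    "one day I will see the world",
    ["travel", "cooking"],
    hypothesis_template="The speaker is talking about {}.",
)
# pairs[0] -> ('one day I will see the world',
#              'The speaker is talking about travel.')
```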

Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, so using it to generate such content is out of scope for its abilities.

Risks, Limitations and Biases

CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).

Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:

sequence_to_classify = "The CEO had a strong handshake."
candidate_labels = ['male', 'female']
hypothesis_template = "This text speaks about a {} profession."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Model size: 223M parameters (F32, Safetensors)
