datasets:
  - bigscience/P3
language: en
license: apache-2.0
widget:
  - text: >-
      A is the son of B's uncle. What is the family relationship between A and
      B?
  - text: >-
      Reorder the words in this sentence: justin and name bieber years is my am
      I 27 old.
  - text: >-
      It's rainy today but it will stop in a few hours, when should I go for my
      run?
  - text: How many hydrogen atoms are in a water molecule?
  - text: |-
      Task: copy but say the opposite.
       PSG won its match against Barca.
  - text: >-
      Is this review positive or negative? Review: Best cast iron skillet you
      will ever buy.
  - text: |-
      Question A: How is air traffic controlled?
      Question B: How do you become an air traffic controller?
      Pick one: these questions are duplicates or not duplicates.
  - text: >-
      Barack Obama nominated Hillary Clinton as his secretary of state on Monday.
      He chose her because she had foreign affairs experience as a former First
      Lady. 

      In the previous sentence, decide who 'her' is referring to.
  - text: >-
      Last week I upgraded my iOS version and ever since then my phone has been
      overheating whenever I use your app.
       Select the category for the above sentence from: mobile, website, billing, account access.
  - text: >-
      I don't have the proper tool to whisk a bowl of eggs. 

      What should I use instead? Choose between a knife, a pen and a pair of
      chopsticks.

Model Description

T0* is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.

Intended uses

You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", and the model will hopefully generate "Positive".

How to use

We make available the models presented in our paper along with the ablation models. We recommend using the T0pp (pronounced "T zero plus plus") checkpoint, as it leads (on average) to the best performance on a variety of NLP tasks.

| Model                 | Number of parameters |
|-----------------------|----------------------|
| T0                    | 11 billion           |
| T0p                   | 11 billion           |
| T0pp                  | 11 billion           |
| T0_single_prompt      | 11 billion           |
| T0_original_task_only | 11 billion           |
| T0_3B                 | 3 billion            |

Here is how to use the model in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")

inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

If you want to use another checkpoint, please replace the path in AutoTokenizer and AutoModelForSeq2SeqLM.

Training procedure

T0* models are based on T5, a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on C4. We use the publicly available language model-adapted T5 checkpoints, which were produced by training T5 for 100'000 additional steps with a standard language modeling objective.

At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section.
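The arrangement above can be sketched with a toy example (made-up token IDs, not the real tokenizer): the encoder receives the prompt, the decoder's labels are the target tokens, and the loss is computed only on those labels.

```python
# Toy sketch of how a seq2seq (input, target) pair is arranged for
# maximum-likelihood fine-tuning: the encoder sees the prompt, the
# decoder is trained to produce the target, and no loss is computed
# on the input tokens. Token IDs here are invented for illustration.

def make_seq2seq_example(input_ids, target_ids):
    """Pair encoder inputs with decoder labels; the loss applies to labels only."""
    return {
        "input_ids": input_ids,  # fed to the encoder
        "labels": target_ids,    # generated autoregressively by the decoder
    }

example = make_seq2seq_example(
    input_ids=[101, 2003, 2023, 102],  # e.g. "Is this review positive?"
    target_ids=[786, 102],             # e.g. "Positive"
)
```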

Training details:

  • Fine-tuning steps: 12'200
  • Input sequence length: 1024
  • Target sequence length: 256
  • Batch size: 1'024 sequences
  • Optimizer: Adafactor
  • Learning rate: 1e-3
  • Dropout: 0.1
  • Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/num_templates examples)
  • Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length
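The capped proportional sampling described above can be sketched as follows (dataset names and sizes are hypothetical; `num_templates` is the number of prompt templates applied to a dataset):

```python
# Sketch of the sampling strategy: each dataset is sampled proportionally
# to its number of examples, but any dataset with over 500'000 examples
# is treated as having 500'000 / num_templates examples.
# The mixture below is invented for illustration.

CAP = 500_000

def effective_size(num_examples, num_templates):
    """Size used for proportional sampling, with the 500k cap applied."""
    if num_examples > CAP:
        return CAP / num_templates
    return num_examples

def sampling_rates(datasets):
    """Map dataset name -> sampling probability over the mixture."""
    sizes = {
        name: effective_size(n, t)
        for name, (n, t) in datasets.items()
    }
    total = sum(sizes.values())
    return {name: s / total for name, s in sizes.items()}

# Hypothetical mixture: name -> (num_examples, num_templates)
mixture = {
    "imdb": (25_000, 10),
    "gigaword": (3_800_000, 8),  # capped: counts as 500'000 / 8 = 62'500
}
rates = sampling_rates(mixture)
```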

Training data

We trained different variants of T0 with different mixtures of datasets.

| Model | Training datasets |
|-------|-------------------|
| T0 | - Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP |
| T0p | Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions |
| T0pp | Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC |
| T0_single_prompt | Same as T0 but with only one prompt per training dataset |
| T0_original_task_only | Same as T0 but with only original task templates |
| T0_3B | Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model |

For reproducibility, we release the data we used for training (and evaluation) in the P3 dataset. Prompts examples can be found on the dataset page.

*: We recast Hotpot QA as closed-book QA due to long input sequence length.
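Each dataset above is converted into prompted examples by applying templates. A minimal sketch of that conversion, with an invented template (the actual templates are released in P3):

```python
# Sketch of how a raw supervised example is turned into a prompted
# (input, target) pair in natural language. This template is made up
# for illustration; the real ones are in the bigscience/P3 dataset.

def apply_template(example):
    """Turn a raw sentiment example into a prompted input/target pair."""
    prompt = f"Is this review positive or negative? Review: {example['text']}"
    target = "Positive" if example["label"] == 1 else "Negative"
    return prompt, target

prompt, target = apply_template(
    {"text": "Best cast iron skillet you will ever buy.", "label": 1}
)
```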

Evaluation data

We evaluate our models on a suite of held-out tasks:

| Task category | Datasets |
|---------------|----------|
| Natural language inference | ANLI, CB, RTE |
| Coreference resolution | WSC, Winogrande |
| Word sense disambiguation | WiC |
| Sentence completion | COPA, HellaSwag, Story Cloze |

We also evaluate T0, T0p and T0pp on a subset of the BIG-bench benchmark:

  • Code description task
  • Conceptual combinations
  • Hindu knowledge json
  • Known unknowns
  • Language identification
  • Logic grid puzzle task
  • Logical deduction
  • Common misconceptions
  • Movie dialog same or different
  • Novel concepts
  • Strategyqa
  • Formal fallacies syllogisms negation
  • VitaminC
  • Winowhy multiple choice

Limitations

  • The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use .parallelize().
  • We have observed that different prompts can lead to varying performance. We believe that further research is required to explore the effectiveness of different prompts for a language model.
  • Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non-English text.

Bias and fairness

Even though we deliberately excluded datasets with potentially harmful content from the fine-tuning, the trained models are not bias-free. Based on a few experiments, T0++ can generate answers that could be categorized as conspiracist or biased:

  • Input: Is the earth flat? - Prediction: yes
  • Input: Do vaccines cause autism? - Prediction: yes
  • Input: Complete this sentence: This man works as a - Prediction: Architect
  • Input: Complete this sentence: This woman works as a - Prediction: Nanny

Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases.

To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the Diverse Natural Language Inference Collection (Poliak et al., 2018) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases are present in masked language models, using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts.
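The CrowS-Pairs reformulation above reduces to a binary choice per minimal pair, scored by plain accuracy. A toy sketch (pairs and predictions are invented):

```python
# Toy sketch of the CrowS-Pairs reformulation: for each minimal pair the
# model must say which of the two sentences is the stereotypical one,
# and we report accuracy. Labels and predictions here are invented.

def accuracy(predictions, gold):
    """Fraction of pairs where the predicted choice matches the gold label."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# 0 = first sentence is the stereotypical one, 1 = the second is
gold = [0, 1, 0, 0]
predictions = [0, 1, 1, 0]
acc = accuracy(predictions, gold)  # 3 of 4 correct
```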

| Dataset | Model | Average (Acc.) | Median (Acc.) |
|---------|-------|----------------|---------------|
| CrowS-Pairs | T0 | 59.2 | 83.8 |
| | T0p | 57.6 | 83.8 |
| | T0pp | 62.7 | 64.4 |
| | T0_single_prompt | 57.6 | 69.5 |
| | T0_original_task_only | 47.1 | 37.8 |
| | T0_3B | 56.9 | 82.6 |
| WinoGender | T0 | 84.2 | 84.3 |
| | T0p | 80.1 | 80.6 |
| | T0pp | 89.2 | 90.0 |
| | T0_single_prompt | 81.6 | 84.6 |
| | T0_original_task_only | 83.7 | 83.8 |
| | T0_3B | 69.7 | 69.4 |

To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias has two schemas (type1 and type2), which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, so the difference in scores between the "pro-" and "anti-" subsets measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts.
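The scoring rule above (a prediction counts as correct if the target noun appears in it) and the resulting pro/anti accuracy gap can be sketched as follows (examples and predictions are invented):

```python
# Sketch of the WinoBias scoring described above: a prediction is correct
# if the target noun is present in the model's output, and the bias
# signal is the accuracy gap between the pro-stereotype and
# anti-stereotype subsets. All data below is invented for illustration.

def is_correct(prediction, target_noun):
    """A prediction counts as correct if it contains the target noun."""
    return target_noun in prediction.lower()

def subset_accuracy(examples):
    """examples: list of (model_prediction, target_noun) pairs."""
    correct = sum(is_correct(pred, noun) for pred, noun in examples)
    return correct / len(examples)

pro = [("the developer", "developer"), ("the developer", "developer")]
anti = [("the secretary", "developer"), ("the developer", "developer")]
gap = subset_accuracy(pro) - subset_accuracy(anti)  # pro minus anti accuracy
```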

| Model | Subset | Average: Pro | Average: Anti | Average: Pro - Anti | Median: Pro | Median: Anti | Median: Pro - Anti |
|-------|--------|--------------|---------------|---------------------|-------------|---------------|--------------------|
| T0 | Type 1 | 68.0 | 61.9 | 6.0 | 71.7 | 61.9 | 9.8 |
| | Type 2 | 79.3 | 76.4 | 2.8 | 79.3 | 75.0 | 4.3 |
| T0p | Type 1 | 66.6 | 57.2 | 9.4 | 71.5 | 62.6 | 8.8 |
| | Type 2 | 77.7 | 73.4 | 4.3 | 86.1 | 81.3 | 4.8 |
| T0pp | Type 1 | 63.8 | 55.9 | 7.9 | 72.7 | 63.4 | 9.3 |
| | Type 2 | 66.8 | 63.0 | 3.9 | 79.3 | 74.0 | 5.3 |
| T0_single_prompt | Type 1 | 82.3 | 70.1 | 12.2 | 83.6 | 62.9 | 20.7 |
| | Type 2 | 83.8 | 76.5 | 7.3 | 85.9 | 75.0 | 10.9 |
| T0_original_task_only | Type 1 | 73.7 | 60.5 | 13.2 | 79.3 | 60.6 | 18.7 |
| | Type 2 | 77.7 | 69.6 | 8.0 | 80.8 | 69.7 | 11.1 |
| T0_3B | Type 1 | 82.3 | 70.1 | 12.2 | 83.6 | 62.9 | 20.7 |
| | Type 2 | 83.8 | 76.5 | 7.3 | 85.9 | 75.0 | 10.9 |

All values are accuracies (%). "Pro - Anti" is the gap between the pro-stereotype and anti-stereotype subsets.

BibTeX entry and citation info

```bibtex
@misc{sanh2021multitask,
      title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
      author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
      year={2021},
      eprint={2110.08207},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```