---
language: en
datasets:
  - squad_v2
---

# T5-base fine-tuned on SQuAD v2

Google's T5 fine-tuned on SQuAD v2 for the Q&A downstream task.

## Details of T5

The T5 model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Here is the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

*(model image)*

## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓

Dataset ID: `squad_v2` from HuggingFace/nlp

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| squad_v2 | train | 130319    |
| squad_v2 | valid | 11873     |

How to load it from `nlp`:

```python
import nlp

train_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.VALIDATION)
```
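
As a quick sanity check (continuing from the snippet above), you can print the split sizes and look at the SQuAD v2 example schema; note that `answers` contains empty lists for unanswerable questions:

```python
# Split sizes should match the table above
print(len(train_dataset), len(valid_dataset))  # 130319 11873

# Each example has 'id', 'title', 'context', 'question' and 'answers' fields
sample = train_dataset[0]
print(sample['question'])  # question string
print(sample['answers'])   # {'text': [...], 'answer_start': [...]}; empty lists mean "unanswerable"
```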

Check out more about this dataset and others in the NLP Viewer.

## Model fine-tuning 🏋️

The training script is a slightly modified version of this one.
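
Although the exact script isn't reproduced here, the gist of T5 fine-tuning on SQuAD v2 is to cast Q&A as text-to-text: the source string is the question and context with task prefixes (the same format used at inference time below), and the target string is the answer text. A minimal preprocessing sketch, with a placeholder label for unanswerable questions (the actual script may handle them differently):

```python
def to_text_pairs(example):
    # Same input format the fine-tuned model expects at inference time
    source = "question: %s  context: %s" % (example['question'], example['context'])
    # SQuAD v2 marks unanswerable questions with an empty 'answers' list
    answers = example['answers']['text']
    target = answers[0] if answers else "no answer"  # placeholder label (assumption)
    return {'source_text': source, 'target_text': target}

# Map over the splits loaded above
train_pairs = train_dataset.map(to_text_pairs)
valid_pairs = valid_dataset.map(to_text_pairs)
```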

## Results 📝

| Metric | Value |
| ------ | ----- |
| EM     | 77.64 |
| F1     | 81.32 |
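
EM (exact match) and F1 are the standard SQuAD metrics: EM counts predictions that equal a gold answer after normalization, and F1 measures token overlap with the gold answer. A simplified, self-contained illustration of how they are computed per example (not the official evaluation script):

```python
import collections
import re
import string

def normalize(text):
    # Lowercase, drop punctuation and articles, collapse whitespace
    text = ''.join(ch for ch in text.lower() if ch not in set(string.punctuation))
    text = re.sub(r'\b(a|an|the)\b', ' ', text)
    return ' '.join(text.split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = collections.Counter(pred_tokens) & collections.Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```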

## Model in Action 🚀

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-squadv2")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-squadv2")

def get_answer(question, context):
    # Build the text-to-text input the model was fine-tuned on:
    # "question: <question>  context: <context>"
    input_text = "question: %s  context: %s" % (question, context)
    features = tokenizer([input_text], return_tensors='pt')

    # The answer is generated directly as text
    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'])

    # skip_special_tokens=True drops the <pad>/</s> tokens around the answer
    return tokenizer.decode(output[0], skip_special_tokens=True)

context = "Manuel have created RuPERTa-base with the support of HF-Transformers and Google"
question = "Who has supported Manuel?"

get_answer(question, context)
# output: 'HF-Transformers and Google'
```

Created by Manuel Romero/@mrm8488 | LinkedIn

Made with ♥ in Spain