Question answering

Question answering tasks return an answer given a question. There are two common forms of question answering:

  • Extractive: extract the answer from the given context.
  • Abstractive: generate an answer from the context that correctly answers the question.

This guide will show you how to fine-tune DistilBERT on the SQuAD dataset for extractive question answering.

See the question answering task page for more information about other forms of question answering and their associated models, datasets, and metrics.
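If you just want to see extractive question answering in action before fine-tuning anything, a pretrained pipeline works out of the box. A minimal sketch (pipeline("question-answering") downloads a default SQuAD-tuned checkpoint; the question and context strings here are arbitrary):

>>> from transformers import pipeline

>>> question_answerer = pipeline("question-answering")
>>> question_answerer(
...     question="To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?",
...     context="The Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858 in Lourdes, France.",
... )  # returns a dict with 'score', 'start', 'end', and 'answer'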

Load SQuAD dataset

Load the SQuAD dataset from the 🤗 Datasets library:

>>> from datasets import load_dataset

>>> squad = load_dataset("squad")

Then take a look at an example:

>>> squad["train"][0]
{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']},
 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.',
 'id': '5733be284776f41900661182',
 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
 'title': 'University_of_Notre_Dame'
}

The answers field is a dictionary containing the starting character position of the answer in the context and the text of the answer.
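You can verify that answer_start is a character index into context by slicing the context directly:

>>> example = squad["train"][0]
>>> start = example["answers"]["answer_start"][0]
>>> answer = example["answers"]["text"][0]
>>> example["context"][start : start + len(answer)]
'Saint Bernadette Soubirous'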

Preprocess

Load the DistilBERT tokenizer to process the question and context fields:

>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

There are a few preprocessing steps particular to question answering that you should be aware of:

  1. Some examples in a dataset may have a very long context that exceeds the maximum input length of the model. Truncate only the context by setting truncation="only_second".
  2. Next, map the start and end positions of the answer to the original context by setting return_offsets_mapping=True.
  3. With the mapping in hand, you can find the start and end tokens of the answer. Use the sequence_ids method to find which part of the offset corresponds to the question and which corresponds to the context (see the short sketch after this list).
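Here is what sequence_ids and the offset mapping look like for a single tokenized example (a minimal sketch; the exact token boundaries depend on the tokenizer):

>>> example = squad["train"][0]
>>> encoded = tokenizer(
...     example["question"],
...     example["context"],
...     truncation="only_second",
...     return_offsets_mapping=True,
... )
>>> encoded.sequence_ids()[:5]  # None for special tokens, 0 for question tokens, 1 for context tokens
[None, 0, 0, 0, 0]
>>> encoded["offset_mapping"][:3]  # (start_char, end_char) spans into the original text
[(0, 0), (0, 2), (3, 7)]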

Here is how you can create a function to truncate and map the start and end tokens of the answer to the context:

>>> def preprocess_function(examples):
...     questions = [q.strip() for q in examples["question"]]
...     inputs = tokenizer(
...         questions,
...         examples["context"],
...         max_length=384,
...         truncation="only_second",
...         return_offsets_mapping=True,
...         padding="max_length",
...     )

...     offset_mapping = inputs.pop("offset_mapping")
...     answers = examples["answers"]
...     start_positions = []
...     end_positions = []

...     for i, offset in enumerate(offset_mapping):
...         answer = answers[i]
...         start_char = answer["answer_start"][0]
...         end_char = answer["answer_start"][0] + len(answer["text"][0])
...         sequence_ids = inputs.sequence_ids(i)

...         # Find the start and end of the context
...         idx = 0
...         while sequence_ids[idx] != 1:
...             idx += 1
...         context_start = idx
...         while sequence_ids[idx] == 1:
...             idx += 1
...         context_end = idx - 1

...         # If the answer is not fully inside the context, label it (0, 0)
...         if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
...             start_positions.append(0)
...             end_positions.append(0)
...         else:
...             # Otherwise it's the start and end token positions
...             idx = context_start
...             while idx <= context_end and offset[idx][0] <= start_char:
...                 idx += 1
...             start_positions.append(idx - 1)

...             idx = context_end
...             while idx >= context_start and offset[idx][1] >= end_char:
...                 idx -= 1
...             end_positions.append(idx + 1)

...     inputs["start_positions"] = start_positions
...     inputs["end_positions"] = end_positions
...     return inputs
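Before mapping this function over the whole dataset, you can sanity-check it on a small slice (a quick check of the expected output keys):

>>> sample = preprocess_function(squad["train"][:2])
>>> sorted(sample.keys())
['attention_mask', 'end_positions', 'input_ids', 'start_positions']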

Use the 🤗 Datasets map function to apply the preprocessing function over the entire dataset. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once. Remove the columns you don’t need:

>>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)

Use DefaultDataCollator to create a batch of examples. Unlike other data collators in 🤗 Transformers, the DefaultDataCollator does not apply additional preprocessing such as padding.

>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
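For example, collating the first two tokenized examples yields a batch of PyTorch tensors padded to max_length (a quick illustration; DefaultDataCollator returns PyTorch tensors unless you pass return_tensors):

>>> batch = data_collator([tokenized_squad["train"][i] for i in range(2)])
>>> batch["input_ids"].shape
torch.Size([2, 384])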

Fine-tune with Trainer

Load DistilBERT with AutoModelForQuestionAnswering:

>>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

>>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")

If you aren’t familiar with fine-tuning a model with the Trainer, take a look at the basic tutorial here!

At this point, only three steps remain:

  1. Define your training hyperparameters in TrainingArguments.
  2. Pass the training arguments to Trainer along with the model, dataset, tokenizer, and data collator.
  3. Call train() to fine-tune your model.

>>> training_args = TrainingArguments(
...     output_dir="./results",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=3,
...     weight_decay=0.01,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_squad["train"],
...     eval_dataset=tokenized_squad["validation"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
... )

>>> trainer.train()
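Once training finishes, you can save the fine-tuned model and tokenizer for later use (the output path here is arbitrary):

>>> trainer.save_model("./qa_model")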

Fine-tune with TensorFlow

Fine-tuning a model in TensorFlow is just as easy, with only a few differences.

If you aren’t familiar with fine-tuning a model with Keras, take a look at the basic tutorial here!
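Note that the DefaultDataCollator created earlier returns PyTorch tensors by default. For TensorFlow, instantiate it with return_tensors="tf":

>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator(return_tensors="tf")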

Convert your datasets to the tf.data.Dataset format with to_tf_dataset. Specify the model inputs and the answer start and end positions in columns, whether to shuffle the dataset order, the batch size, and the data collator:

>>> tf_train_set = tokenized_squad["train"].to_tf_dataset(
...     columns=["attention_mask", "input_ids", "start_positions", "end_positions"],
...     dummy_labels=True,
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = tokenized_squad["validation"].to_tf_dataset(
...     columns=["attention_mask", "input_ids", "start_positions", "end_positions"],
...     dummy_labels=True,
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )

Set up an optimizer function, learning rate schedule, and some training hyperparameters:

>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_epochs = 2
>>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
>>> optimizer, schedule = create_optimizer(
...     init_lr=2e-5,
...     num_warmup_steps=0,
...     num_train_steps=total_train_steps,
... )

Load DistilBERT with TFAutoModelForQuestionAnswering:

>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")

Configure the model for training with compile. The model computes a task-appropriate loss internally from the labels included in the input, so you don’t need to pass one:

>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)

Call fit to fine-tune the model:

>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=num_epochs)
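To sanity-check the fine-tuned model, run a single example through it and decode the highest-scoring span (a minimal sketch that takes the argmax of the start and end logits and assumes the predicted end token comes after the start token):

>>> example = squad["train"][0]
>>> inputs = tokenizer(example["question"], example["context"], return_tensors="tf", truncation="only_second")
>>> outputs = model(**inputs)
>>> start_idx = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> end_idx = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
>>> tokenizer.decode(inputs["input_ids"][0][start_idx : end_idx + 1])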

For a more in-depth example of how to fine-tune a model for question answering, take a look at the corresponding PyTorch notebook or TensorFlow notebook.