Question Answering
Question Answering models can retrieve the answer to a question from a given text, which is useful for searching for an answer in a document. Some question answering models can generate answers without context!
Example
Question: Which name is also used to describe the Amazon rainforest in English?
Context: The Amazon rainforest, also known in English as Amazonia or the Amazon Jungle
Answer: Amazonia
About Question Answering
Use Cases
Frequently Asked Questions
You can use Question Answering (QA) models to automate the response to frequently asked questions by using a knowledge base (documents) as context. Answers to customer questions can be drawn from those documents.
⚡⚡ If you’d like to save inference time, you can first use passage ranking models to find which document is most likely to contain the answer, and then run the QA model on that document only, as in the sketch below.
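A minimal sketch of this pattern, assuming a sentence-transformers cross-encoder for ranking and the default QA pipeline (the model names here are illustrative choices, not requirements):

from sentence_transformers import CrossEncoder
from transformers import pipeline

question = "How do I reset my password?"
documents = [
    "To reset your password, open Settings and choose 'Reset password'.",
    "Our offices are open Monday through Friday, 9am to 5pm.",
]

# Score each (question, document) pair and keep the best-matching document.
ranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = ranker.predict([(question, doc) for doc in documents])
best_doc = documents[scores.argmax()]

# Run extractive QA only on the top-ranked document.
qa_model = pipeline("question-answering")
qa_model(question=question, context=best_doc)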
Task Variants
There are different QA variants based on the inputs and outputs:
- Extractive QA: The model extracts the answer from a context. The context here could be a provided text, a table or even HTML! This is usually solved with BERT-like models.
- Open Generative QA: The model generates free text directly based on the context. You can learn more about the Text Generation task on its page.
- Closed Generative QA: In this case, no context is provided. The answer is generated entirely by the model (a short sketch follows below).
In extractive, open-book QA, the model takes a question together with a context and extracts the answer from that context.
You can also differentiate QA models depending on whether they are open-domain or closed-domain. Open-domain models are not restricted to a specific domain, while closed-domain models are restricted to a specific domain (e.g. legal, medical documents).
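As an illustration of closed generative QA, here is a minimal sketch that asks a question with no context at all, assuming a text2text-generation checkpoint such as google/flan-t5-base (any instruction-tuned generative model could be substituted; the exact output will vary by model):

from transformers import pipeline

# No context is passed: the model answers from its own knowledge.
generator = pipeline("text2text-generation", model="google/flan-t5-base")
generator("Which name is also used to describe the Amazon rainforest in English?")
## e.g. [{'generated_text': 'Amazonia'}] (output depends on the model)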
Inference
You can run inference with QA models in the 🤗 Transformers library using the question-answering pipeline. If no model checkpoint is given, the pipeline is initialized with distilbert-base-cased-distilled-squad. The pipeline takes a question and a context and returns the answer extracted from the context.
from transformers import pipeline
qa_model = pipeline("question-answering")
question = "Where do I live?"
context = "My name is Merve and I live in İstanbul."
qa_model(question=question, context=context)
## {'answer': 'İstanbul', 'end': 39, 'score': 0.953, 'start': 31}
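You can also pass an explicit checkpoint instead of relying on the default; deepset/roberta-base-squad2 is used here purely as an illustrative extractive QA model:

# Same pipeline, but with an explicitly chosen checkpoint.
qa_model = pipeline("question-answering", model="deepset/roberta-base-squad2")
qa_model(question="Where do I live?", context="My name is Merve and I live in İstanbul.")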
Useful Resources
Would you like to learn more about QA? Awesome! Here are some curated resources that you may find helpful!
- Course Chapter on Question Answering
- Question Answering Workshop
- How to Build an Open-Domain Question Answering System?
- Blog Post: ELI5 A Model for Open Domain Long Form Question Answering
Metrics
- exact-match: Exact Match is a metric based on the strict character-level match between the predicted answer and the correct answer. For a correctly predicted answer, Exact Match is 1; if even one character differs, it is 0.
- f1: The F1 score is useful when false positives and false negatives matter equally. It is computed over the words of the predicted answer against the words of the correct answer.
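As a minimal sketch, both metrics can be computed with the 🤗 Evaluate library's squad metric (the id and answer_start fields are required by the metric's input format but do not affect the scores here):

import evaluate

# The "squad" metric returns both Exact Match and F1, on a 0-100 scale.
squad_metric = evaluate.load("squad")
predictions = [{"id": "1", "prediction_text": "Amazonia"}]
references = [{"id": "1", "answers": {"text": ["Amazonia"], "answer_start": [39]}}]
squad_metric.compute(predictions=predictions, references=references)
## {'exact_match': 100.0, 'f1': 100.0}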