MTL-question-answering

The MTL-question-answering model was proposed in the paper MVP: Multi-task Supervised Pre-training for Natural Language Generation by Tianyi Tang, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen.

Detailed information and instructions can be found at https://github.com/RUCAIBox/MVP.

Model Description

MTL-question-answering was supervised pre-trained on a mixture of labeled question answering datasets. It is a variant (Single) of our main MVP model and follows a standard Transformer encoder-decoder architecture.

MTL-question-answering is specially designed for question answering tasks, such as reading comprehension (SQuAD), conversational question answering (CoQA), and closed-book question answering (Natural Questions); see the example below, including the reading-comprehension sketch that follows it.

Example

>>> from transformers import MvpTokenizer, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-question-answering")

>>> inputs = tokenizer(
...     "Answer the following question: From which country did Angola achieve independence in 1975?",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Portugal']
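
Beyond closed-book questions, the model can also be given a supporting passage, as in reading comprehension. The snippet below is a minimal sketch: appending the context after the question as plain text is an illustrative assumption, not necessarily the exact input format used during pre-training (the canonical data formats are documented in the MVP repository).

>>> # Reading-comprehension style input: question followed by a supporting passage.
>>> # NOTE: this concatenation format is an assumption for illustration only.
>>> inputs = tokenizer(
...     "Answer the following question: When did Angola achieve independence? "
...     "Angola achieved independence from Portugal in 1975 after a long armed struggle.",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs, num_beams=5, max_length=20)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

num_beams and max_length are ordinary transformers generation arguments, shown here only to illustrate how decoding can be tuned; the defaults also work.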

Related Models

MVP: https://huggingface.co/RUCAIBox/mvp.

Prompt-based models: see the mvp-* variants at https://huggingface.co/RUCAIBox.

Multi-task models: see the mtl-* variants at https://huggingface.co/RUCAIBox.

Citation

@article{tang2022mvp,
  title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
  author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
  journal={arXiv preprint arXiv:2206.12131},
  year={2022},
  url={https://arxiv.org/abs/2206.12131},
}