---
license: apache-2.0
language:
  - en
tags:
  - text-generation
  - text2text-generation
  - summarization
  - conversational
pipeline_tag: text2text-generation
widget:
  - text: >-
      Summarize: You may want to stick it to your boss and leave your job, but
      don't do it if these are your reasons.
    example_title: Summarization
  - text: >-
      Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce
      Lee was a cha cha dancer?
    example_title: Dialog
  - text: >-
      Describe the following data: Iron Man | instance of | Superhero [SEP] Stan
      Lee | creator | Iron Man
    example_title: Data-to-text
  - text: >-
      Given the story title: I think all public schools should have a uniform
      dress code.
    example_title: Story Generation
  - text: >-
      Answer the following question: From which country did Angola achieve
      independence in 1975?
    example_title: Question Answering
  - text: >-
      Generate the question based on the answer: boxing [X_SEP] A bolo punch is
      a punch used in martial arts . A hook is a punch in boxing .
    example_title: Question Generation
---

# MVP

The MVP model was proposed in [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.

Detailed information and instructions can be found at [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).

## Model Description

MVP is pre-trained in a supervised manner on a mixture of labeled datasets. It follows a standard Transformer encoder-decoder architecture.

MVP is specially designed for natural language generation and can be adapted to a wide range of generation tasks, including but not limited to summarization, data-to-text generation, open-ended dialogue systems, story generation, question answering, question generation, task-oriented dialogue systems, commonsense generation, paraphrase generation, text style transfer, and text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and (extractive) question answering.

## Examples

For summarization:

```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")

>>> inputs = tokenizer(
...     "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Why You Shouldn't Quit Your Job"]
```

For data-to-text generation:

```python
>>> from transformers import MvpTokenizerFast, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")

>>> inputs = tokenizer(
...     "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Stan Lee created the character of Iron Man, a fictional superhero appearing in American comic']
```
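
The same checkpoint handles the other tasks listed above purely through the instruction prefix. As a sketch, here is the dialogue prompt taken from the widget examples; the generated reply depends on the decoding configuration, so no fixed output is shown:

```python
from transformers import MvpTokenizer, MvpForConditionalGeneration

tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")

# Dialogue prompt taken verbatim from the widget examples above.
inputs = tokenizer(
    "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?",
    return_tensors="pt",
)
generated_ids = model.generate(**inputs)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```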

## Related Models

MVP: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp)

Prompt-based models:

Multi-task models:

## Citation

```bibtex
@article{tang2022mvp,
  title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
  author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
  journal={arXiv preprint arXiv:2206.12131},
  year={2022},
  url={https://arxiv.org/abs/2206.12131},
}
```