
Overview

Language model: Pegasus-xsum
Language: English
Downstream-task: Question-Answer Generation (QAG)
Training data: SQuAD 2.0, NewsQA
Eval data: SQuAD 2.0, NewsQA
Infrastructure: NVIDIA Tesla K80, 12 GB VRAM

Hyperparameters

per_device_train_batch_size = 2
per_device_eval_batch_size = 2
num_train_epochs = 3
base_LM_model = "pegasus-xsum"
source_max_token_len = 256
target_max_token_len = 64
learning_rate = 5e-5
lr_schedule = LinearWarmup
warmup_steps = 150
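
The card does not include the training loop itself. The sketch below shows one way these hyperparameters map onto the standard Hugging Face Seq2SeqTrainer API; the dataset column names ("context", "target"), the output directory, and the dataset placeholders are hypothetical, not taken from this repo.

# Hedged sketch: maps the hyperparameters above onto the standard
# Hugging Face Seq2SeqTrainer API. Dataset columns ("context", "target")
# and output_dir are hypothetical; the card does not show the real loop.
from transformers import (
    PegasusForConditionalGeneration,
    PegasusTokenizerFast,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

base_model = "google/pegasus-xsum"
tokenizer = PegasusTokenizerFast.from_pretrained(base_model)
model = PegasusForConditionalGeneration.from_pretrained(base_model)

def preprocess(example):
    # Truncate source/target to the token limits listed above
    model_inputs = tokenizer(example["context"], max_length=256, truncation=True)
    # text_target requires transformers >= 4.21
    labels = tokenizer(text_target=example["target"], max_length=64, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

training_args = Seq2SeqTrainingArguments(
    output_dir="qag_pegasus",        # hypothetical output directory
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    num_train_epochs=3,
    learning_rate=5e-5,
    lr_scheduler_type="linear",      # linear decay after warmup
    warmup_steps=150,
)

# trainer = Seq2SeqTrainer(model=model, args=training_args,
#                          train_dataset=train_ds,  # preprocessed SQuAD 2.0 + NewsQA
#                          eval_dataset=eval_ds)
# trainer.train()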

Usage

from transformers import PegasusForConditionalGeneration, PegasusTokenizerFast

model_name = 'nloc2578/QAG_Pegasus_3ep_eval'
tokenizer = PegasusTokenizerFast.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)  # Pegasus defines its own pad token, so no pad_token_id override is needed

text = '''The primary goal of distractor generation is generating answer
options that are plausible answers to the question and might appear
correct to a user who does not know the correct answer. Distractors
should also be clearly distinct from the key and from each other, and
they should not be correct answers to the question (for questions
that might have multiple correct answers).'''

# Tokenize the passage and generate a question-answer pair
inputs = tokenizer(text, return_tensors='pt')
output = model.generate(inputs['input_ids'])
result = tokenizer.decode(output[0], skip_special_tokens=True)

print(result)
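
The call above uses the model's default decoding settings. The variant below makes them explicit; num_beams=4 is an illustrative assumption rather than a value from this card, while the 64-token cap mirrors target_max_token_len from training.

# Explicit decoding settings (illustrative; not from the card).
# max_new_tokens=64 mirrors target_max_token_len used in training.
output = model.generate(
    inputs['input_ids'],
    max_new_tokens=64,
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))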