bart-distractor-generation-both

Model description

This model is a sequence-to-sequence distractor generator which takes an answer and context as an input, and generates a distractor as an output.
It is based on a pretrained bart-base model.

How to use

The model takes a concatenated answer and context as an input sequence, and will generate a full distractor sentence as an output sequence. The maximum sequence length is 1024 tokens. Inputs should be organised into the following format:

answer \n context 

The input sequence can then be encoded and passed as the input_ids argument to the model's generate() method.
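
As a minimal sketch with the transformers library: the repository id ("voidful/bart-distractor-generation-both"), the example answer/context, and the use of a real newline character as the separator are assumptions, not taken from this card; adjust them to the actual checkpoint and training format.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repository id; replace with the actual one if it differs.
model_name = "voidful/bart-distractor-generation-both"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical example inputs.
answer = "Paris"
context = "Paris is the capital and most populous city of France."

# Build the "answer \n context" input; a literal newline is assumed here.
input_text = f"{answer}\n{context}"

# Encode, truncating to the model's 1024-token limit.
inputs = tokenizer(input_text, max_length=1024, truncation=True, return_tensors="pt")

# Generate a distractor with beam search and decode it back to text.
outputs = model.generate(inputs["input_ids"], max_length=64, num_beams=4, early_stopping=True)
distractor = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(distractor)
```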

