Model description

This model is a sequence-to-sequence question generator: it takes only a context as input and generates a question as output.
It is based on the pretrained bart-base model and fine-tuned on the EQG-RACE corpus.

Intended uses & limitations

The model is trained to generate examination-style multiple-choice questions.

How to use

The model takes a context as its input sequence and generates a question as its output sequence. The maximum sequence length is 1024 tokens.
The input sequence can then be encoded and passed as the input_ids argument in the model's generate() method.
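The steps above can be sketched with the Hugging Face transformers library. This is a minimal, hypothetical example: the model identifier below is a stand-in for this repository's actual name, and the generation parameters (beam size, output length) are illustrative defaults, not values specified by the model card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Stand-in identifier -- replace with this repository's model name.
model_name = "facebook/bart-base"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

context = (
    "The Great Wall of China was built over many centuries by successive "
    "dynasties to protect the northern borders of the empire."
)

# Encode the context, truncating to the model's 1024-token limit.
inputs = tokenizer(context, max_length=1024, truncation=True, return_tensors="pt")

# Pass the encoded context as input_ids to generate() to produce a question.
outputs = model.generate(
    inputs["input_ids"], max_length=64, num_beams=4, early_stopping=True
)
question = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(question)
```

Since the checkpoint above is the base model rather than the fine-tuned one, the output here will not be a well-formed question; with the fine-tuned weights the same code yields the generated question as a decoded string.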

