
E2E-QA-Mining

Model description

This model mines question-answer pairs from a given context in an end-to-end fashion. It takes a context as input and generates a list of questions and answers as output. It is based on a pre-trained t5-small model and is trained with a prompt-engineering technique.

How to use

The model takes the context (with a prompt) as an input sequence and generates question-answer pairs as an output sequence. The maximum sequence length is 512 tokens. Inputs should be formatted as follows:

context: context text here. generate questions and answers:

The input sequence can then be encoded and passed as the input_ids argument to the model's generate() method.
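
For example, here is a minimal sketch using the transformers library, assuming the checkpoint is published on the Hugging Face Hub as mojians/E2E-QA-Mining and loads with AutoModelForSeq2SeqLM; the example context and generation settings are illustrative:

```python
# Minimal sketch: load the model and generate QA pairs from a context.
# Assumes the checkpoint "mojians/E2E-QA-Mining" is available on the Hub
# and is compatible with AutoModelForSeq2SeqLM (T5-style seq2seq model).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "mojians/E2E-QA-Mining"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Build the input in the prompt format described above.
context = "The Eiffel Tower was completed in 1889 and is located in Paris."
prompt = f"context: {context} generate questions and answers:"

# Encode the prompt (max sequence length is 512 tokens) and generate.
inputs = tokenizer(prompt, max_length=512, truncation=True, return_tensors="pt")
outputs = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=512,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The decoded output sequence contains the generated question-answer pairs for the given context.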

You can try out the demo in the E2E-QA-Mining Space app.

Limitations and bias

The model is limited to generating questions in the same style as those found in SQuAD. The generated questions can be leading or reflect biases present in the context. If the context is too short or absent, or if the context and answer do not match, the generated question is likely to be incoherent.

Training data

The model was fine-tuned on a dataset made up of several well-known QA datasets, including SQuAD.

Source and Citation

Please find our code and cite us via this repo: https://github.com/jian-mo/E2E-QA-Mining
