Model Overview

This is an ELECTRA-Large QA model trained in two stages. First, it is trained on synthetic adversarial data generated using a BART-Large question generator; then it is fine-tuned on SQuAD and AdversarialQA in a second stage.


Training data: SQuAD + AdversarialQA
Evaluation data: SQuAD + AdversarialQA
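The model can be used for extractive question answering with the Hugging Face transformers library. The sketch below is a minimal example; the model identifier `mbartolo/electra-large-synqa` is an assumption and should be replaced with this checkpoint's actual repository name.

```python
# Minimal sketch: extractive QA with a transformers pipeline.
# NOTE: the model id below is an assumption, not confirmed by this card.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="mbartolo/electra-large-synqa",  # assumed model id
)

result = qa(
    question="What architecture is the model based on?",
    context=(
        "This QA model is based on ELECTRA-Large and fine-tuned on "
        "SQuAD and AdversarialQA."
    ),
)
print(result["answer"])
```

The pipeline returns a dict with the extracted answer span, its character offsets in the context, and a confidence score.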

Training Process

The model is trained for approximately 1 epoch on the synthetic data and 2 epochs on the manually curated data.

Additional Information

Please refer to the paper for full details. You can also interact with the model on Dynabench.
