This is an ELECTRA-Large QA model initialized from https://huggingface.co/google/electra-large-discriminator and trained in two stages. First, it is trained on synthetic adversarial data generated with a BART-Large question generator; it is then fine-tuned on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293) in a second stage.
Training data: SQuAD + AdversarialQA
Evaluation data: SQuAD + AdversarialQA
Approx. 1 training epoch on the synthetic data and 2 training epochs on the manually-curated data.
Please refer to https://arxiv.org/abs/2104.08678 for full details. You can interact with the model on Dynabench here: https://dynabench.org/models/109
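For local use, the model can be loaded through the standard `transformers` question-answering pipeline. This is a minimal sketch, not code from the paper; the example question and context are illustrative, and downloading the weights requires network access.

```python
from transformers import pipeline

# Model ID from this card; weights are downloaded on first use.
MODEL_NAME = "mbartolo/electra-large-synqa"

def answer(question: str, context: str, model_name: str = MODEL_NAME) -> dict:
    """Run extractive QA; returns a dict with 'answer', 'score', 'start', 'end'."""
    qa = pipeline("question-answering", model=model_name)
    return qa(question=question, context=context)

if __name__ == "__main__":
    # Illustrative example, not from the paper's evaluation data.
    context = "AdversarialQA was collected with a human-and-model-in-the-loop procedure."
    print(answer("How was AdversarialQA collected?", context))
```

The `start` and `end` fields in the result are character offsets into the context, so the answer span can be recovered with `context[result["start"]:result["end"]]`.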
Evaluation results for mbartolo/electra-large-synqa (self-reported, SQuAD validation set):
- Exact Match: 89.416
- F1: 94.785