## Model Overview
This is an ELECTRA-Large QA model fine-tuned from https://huggingface.co/google/electra-large-discriminator in two stages. First, it is trained on synthetic adversarial data generated using a BART-Large question generator; then it is fine-tuned on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293) in a second stage.
## Data
- Training data: SQuAD + AdversarialQA
- Evaluation data: SQuAD + AdversarialQA
## Training Process
Approximately one training epoch on the synthetic data, followed by two training epochs on the manually curated data.
## Additional Information
Please refer to https://arxiv.org/abs/2104.08678 for full details. You can interact with the model on Dynabench at https://dynabench.org/models/109.
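A minimal sketch of querying the model with the Hugging Face `transformers` question-answering pipeline (the model identifier `mbartolo/electra-large-synqa` is taken from this card; the question and context below are hypothetical examples, not from the training data):

```python
from transformers import pipeline

# Load the extractive QA pipeline for this model from the Hugging Face Hub.
# The first call downloads the model weights, which are large (~1.3 GB).
qa = pipeline("question-answering", model="mbartolo/electra-large-synqa")

# Hypothetical example inputs; any question/context pair works the same way.
result = qa(
    question="What was the question generator based on?",
    context="The synthetic adversarial data was generated using a "
            "BART-Large question generator.",
)

# The pipeline returns the extracted answer span, its confidence score,
# and the character offsets of the span within the context.
print(result["answer"], result["score"])
```

The pipeline performs extractive QA, so the answer is always a span copied from the supplied context rather than free-form generated text.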
Datasets used to train mbartolo/electra-large-synqa: SQuAD and AdversarialQA.
## Evaluation Results
- Exact Match on the SQuAD validation set (self-reported): 89.416
- F1 on the SQuAD validation set (self-reported): 94.785