---
language:
- en
license: apache-2.0
tags:
- question-answering
datasets:
- adversarial_qa
- mbartolo/synQA
- squad
metrics:
- exact_match
- f1
model-index:
- name: mbartolo/roberta-large-synqa-ext
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: adversarial_qa
      type: adversarial_qa
      config: adversarialQA
      split: validation
    metrics:
    - type: exact_match
      value: 53.2
      name: Exact Match
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDVkOTMxYjYxMTkzYzY1ZWYwZWU2ZGEwNTFkNTVlYjQwYzYyNjkwNzM3Yzc5NzBhZWI3MjYxMzQ2ZmNhNWNhNCIsInZlcnNpb24iOjF9.A0BkWcotTRPButOduP74809_EjrMv55qZxG1t5vCXgp25FHmyaLJOVhAvwYckwWDoatkffRkPg63K45nedsXDA
    - type: f1
      value: 64.6266
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODkyZjY3NTEwYTJmNmMyZGQzYTFkMjk5MDdlNWZmNWI4MGI5MDMxMTZkZWNhMGY1ZDdiMmM5YWRiOTAwMjgyOCIsInZlcnNpb24iOjF9.n7f0XdCOnszr4T_FnEuRhkW6_aG2IjLrkjkIaoPX1bB5UCk_veUoXqQWP728-AOSzxZwOaHv-7EjuCDip5h3Bw
---

# Model Overview

This is a RoBERTa-Large question-answering model trained from https://huggingface.co/roberta-large in two stages. First, it is trained on synthetic adversarial data generated by a BART-Large question generator over Wikipedia passages drawn from SQuAD as well as Wikipedia passages external to SQuAD. It is then fine-tuned in a second stage on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293).

# Data

Training data: SQuAD + AdversarialQA

Evaluation data: SQuAD + AdversarialQA

# Training Process

Approximately one training epoch on the synthetic data, followed by two training epochs on the manually curated data.

# Additional Information

Please refer to https://arxiv.org/abs/2104.08678 for full details.
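# Usage Example

The model can be loaded for extractive question answering with the Hugging Face `transformers` library. Below is a minimal sketch using the standard `question-answering` pipeline; the model id is taken from this card's metadata, while the example question and context are purely illustrative.

```python
# Minimal sketch: extractive QA with the transformers pipeline.
# Assumes `transformers` (and a backend such as PyTorch) is installed.
from transformers import pipeline

# Load the model and its tokenizer from the Hugging Face Hub.
qa = pipeline("question-answering", model="mbartolo/roberta-large-synqa-ext")

# Illustrative context and question (not from the training data).
context = (
    "RoBERTa is a robustly optimized BERT pretraining approach "
    "released by Facebook AI in 2019."
)
result = qa(question="Who released RoBERTa?", context=context)

# `result` is a dict with the extracted answer span, a confidence
# score, and character offsets into the context.
print(result["answer"], result["score"])
```

The pipeline returns the highest-scoring answer span; pass `top_k` to retrieve multiple candidate spans instead of just the best one.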